00:00:00.001 Started by upstream project "autotest-per-patch" build number 132815 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:19.217 The recommended git tool is: git 00:00:19.217 using credential 00000000-0000-0000-0000-000000000002 00:00:19.219 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:19.235 Fetching changes from the remote Git repository 00:00:19.238 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:19.254 Using shallow fetch with depth 1 00:00:19.254 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:19.254 > git --version # timeout=10 00:00:19.267 > git --version # 'git version 2.39.2' 00:00:19.267 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:19.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:19.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:23.844 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:23.861 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:23.882 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:23.882 > git config core.sparsecheckout # timeout=10 00:00:23.898 > git read-tree -mu HEAD # timeout=10 00:00:23.918 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:23.942 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:23.942 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:24.071 [Pipeline] Start of Pipeline 00:00:24.084 [Pipeline] library 00:00:24.086 Loading library shm_lib@master 00:00:24.086 Library shm_lib@master is cached. Copying from home. 00:00:24.100 [Pipeline] node 00:00:39.115 Still waiting to schedule task 00:00:39.116 Waiting for next available executor on ‘vagrant-vm-host’ 00:24:10.960 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_4 00:24:10.962 [Pipeline] { 00:24:10.974 [Pipeline] catchError 00:24:10.976 [Pipeline] { 00:24:10.992 [Pipeline] wrap 00:24:11.002 [Pipeline] { 00:24:11.011 [Pipeline] stage 00:24:11.013 [Pipeline] { (Prologue) 00:24:11.033 [Pipeline] echo 00:24:11.035 Node: VM-host-SM38 00:24:11.053 [Pipeline] cleanWs 00:24:11.064 [WS-CLEANUP] Deleting project workspace... 00:24:11.064 [WS-CLEANUP] Deferred wipeout is used... 
00:24:11.070 [WS-CLEANUP] done 00:24:11.313 [Pipeline] setCustomBuildProperty 00:24:11.409 [Pipeline] httpRequest 00:24:11.795 [Pipeline] echo 00:24:11.797 Sorcerer 10.211.164.112 is alive 00:24:11.807 [Pipeline] retry 00:24:11.809 [Pipeline] { 00:24:11.824 [Pipeline] httpRequest 00:24:11.828 HttpMethod: GET 00:24:11.829 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:24:11.829 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:24:11.830 Response Code: HTTP/1.1 200 OK 00:24:11.830 Success: Status code 200 is in the accepted range: 200,404 00:24:11.831 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_4/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:24:11.975 [Pipeline] } 00:24:11.993 [Pipeline] // retry 00:24:12.000 [Pipeline] sh 00:24:12.277 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:24:12.296 [Pipeline] httpRequest 00:24:12.680 [Pipeline] echo 00:24:12.682 Sorcerer 10.211.164.112 is alive 00:24:12.692 [Pipeline] retry 00:24:12.693 [Pipeline] { 00:24:12.708 [Pipeline] httpRequest 00:24:12.713 HttpMethod: GET 00:24:12.713 URL: http://10.211.164.112/packages/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz 00:24:12.714 Sending request to url: http://10.211.164.112/packages/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz 00:24:12.716 Response Code: HTTP/1.1 200 OK 00:24:12.716 Success: Status code 200 is in the accepted range: 200,404 00:24:12.717 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_4/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz 00:24:14.993 [Pipeline] } 00:24:15.012 [Pipeline] // retry 00:24:15.031 [Pipeline] sh 00:24:15.306 + tar --no-same-owner -xf spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz 00:24:17.882 [Pipeline] sh 00:24:18.161 + git -C spdk log --oneline -n5 00:24:18.161 c12cb8fe3 util: add method for setting fd_group's wrapper 00:24:18.161 43c35d804 util: multi-level fd_group nesting 00:24:18.161 6336b7c5c util: keep track of nested child fd_groups 00:24:18.161 2e1d23f4b fuse_dispatcher: make header internal 00:24:18.161 3318278a6 vhost: check if vsession exists before remove scsi vdev 00:24:18.178 [Pipeline] writeFile 00:24:18.193 [Pipeline] sh 00:24:18.470 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:24:18.482 [Pipeline] sh 00:24:18.762 + cat autorun-spdk.conf 00:24:18.762 SPDK_RUN_FUNCTIONAL_TEST=1 00:24:18.762 SPDK_TEST_NVME=1 00:24:18.762 SPDK_TEST_FTL=1 00:24:18.762 SPDK_TEST_ISAL=1 00:24:18.762 SPDK_RUN_ASAN=1 00:24:18.762 SPDK_RUN_UBSAN=1 00:24:18.762 SPDK_TEST_XNVME=1 00:24:18.762 SPDK_TEST_NVME_FDP=1 00:24:18.762 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:18.768 RUN_NIGHTLY=0 00:24:18.770 [Pipeline] } 00:24:18.784 [Pipeline] // stage 00:24:18.800 [Pipeline] stage 00:24:18.803 [Pipeline] { (Run VM) 00:24:18.816 [Pipeline] sh 00:24:19.169 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:24:19.169 + echo 'Start stage prepare_nvme.sh' 00:24:19.169 Start stage prepare_nvme.sh 00:24:19.169 + [[ -n 3 ]] 00:24:19.169 + disk_prefix=ex3 00:24:19.169 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_4 ]] 00:24:19.169 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_4/autorun-spdk.conf ]] 00:24:19.169 + source /var/jenkins/workspace/nvme-vg-autotest_4/autorun-spdk.conf 00:24:19.170 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:24:19.170 ++ SPDK_TEST_NVME=1 00:24:19.170 ++ SPDK_TEST_FTL=1 00:24:19.170 ++ SPDK_TEST_ISAL=1 00:24:19.170 ++ SPDK_RUN_ASAN=1 
00:24:19.170 ++ SPDK_RUN_UBSAN=1 00:24:19.170 ++ SPDK_TEST_XNVME=1 00:24:19.170 ++ SPDK_TEST_NVME_FDP=1 00:24:19.170 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:19.170 ++ RUN_NIGHTLY=0 00:24:19.170 + cd /var/jenkins/workspace/nvme-vg-autotest_4 00:24:19.170 + nvme_files=() 00:24:19.170 + declare -A nvme_files 00:24:19.170 + backend_dir=/var/lib/libvirt/images/backends 00:24:19.170 + nvme_files['nvme.img']=5G 00:24:19.170 + nvme_files['nvme-cmb.img']=5G 00:24:19.170 + nvme_files['nvme-multi0.img']=4G 00:24:19.170 + nvme_files['nvme-multi1.img']=4G 00:24:19.170 + nvme_files['nvme-multi2.img']=4G 00:24:19.170 + nvme_files['nvme-openstack.img']=8G 00:24:19.170 + nvme_files['nvme-zns.img']=5G 00:24:19.170 + (( SPDK_TEST_NVME_PMR == 1 )) 00:24:19.170 + (( SPDK_TEST_FTL == 1 )) 00:24:19.170 + nvme_files["nvme-ftl.img"]=6G 00:24:19.170 + (( SPDK_TEST_NVME_FDP == 1 )) 00:24:19.170 + nvme_files["nvme-fdp.img"]=1G 00:24:19.170 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:24:19.170 + for nvme in "${!nvme_files[@]}" 00:24:19.170 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:24:19.170 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:24:19.170 + for nvme in "${!nvme_files[@]}" 00:24:19.170 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:24:19.170 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:24:19.170 + for nvme in "${!nvme_files[@]}" 00:24:19.170 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:24:19.427 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:24:19.427 + for nvme in "${!nvme_files[@]}" 00:24:19.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:24:19.427 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:24:19.427 + for nvme in "${!nvme_files[@]}" 00:24:19.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:24:19.427 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:24:19.684 + for nvme in "${!nvme_files[@]}" 00:24:19.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:24:19.685 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:24:19.685 + for nvme in "${!nvme_files[@]}" 00:24:19.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:24:19.685 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:24:19.685 + for nvme in "${!nvme_files[@]}" 00:24:19.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:24:19.685 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:24:19.685 + for nvme in "${!nvme_files[@]}" 00:24:19.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:24:19.942 Formatting 
'/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:24:19.942 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:24:19.942 + echo 'End stage prepare_nvme.sh' 00:24:19.942 End stage prepare_nvme.sh 00:24:19.953 [Pipeline] sh 00:24:20.231 + DISTRO=fedora39 00:24:20.231 + CPUS=10 00:24:20.231 + RAM=12288 00:24:20.231 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:24:20.231 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:24:20.231 00:24:20.231 DIR=/var/jenkins/workspace/nvme-vg-autotest_4/spdk/scripts/vagrant 00:24:20.231 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_4/spdk 00:24:20.231 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_4 00:24:20.231 HELP=0 00:24:20.231 DRY_RUN=0 00:24:20.231 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:24:20.231 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:24:20.231 NVME_AUTO_CREATE=0 00:24:20.231 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:24:20.231 NVME_CMB=,,,, 00:24:20.231 NVME_PMR=,,,, 00:24:20.231 NVME_ZNS=,,,, 00:24:20.231 NVME_MS=true,,,, 00:24:20.231 NVME_FDP=,,,on, 00:24:20.231 SPDK_VAGRANT_DISTRO=fedora39 00:24:20.231 SPDK_VAGRANT_VMCPU=10 00:24:20.231 SPDK_VAGRANT_VMRAM=12288 00:24:20.231 SPDK_VAGRANT_PROVIDER=libvirt 00:24:20.231 SPDK_VAGRANT_HTTP_PROXY= 00:24:20.231 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:24:20.231 SPDK_OPENSTACK_NETWORK=0 00:24:20.231 VAGRANT_PACKAGE_BOX=0 00:24:20.231 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile 00:24:20.231 FORCE_DISTRO=true 00:24:20.231 VAGRANT_BOX_VERSION= 00:24:20.231 EXTRA_VAGRANTFILES= 00:24:20.231 NIC_MODEL=e1000 00:24:20.231 00:24:20.231 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt' 00:24:20.231 /var/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_4 00:24:22.160 Bringing machine 'default' up with 'libvirt' provider... 00:24:23.091 ==> default: Creating image (snapshot of base box volume). 00:24:23.091 ==> default: Creating domain with the following settings... 
00:24:23.091 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733785563_b3164670b529af791b8f 00:24:23.091 ==> default: -- Domain type: kvm 00:24:23.091 ==> default: -- Cpus: 10 00:24:23.091 ==> default: -- Feature: acpi 00:24:23.091 ==> default: -- Feature: apic 00:24:23.091 ==> default: -- Feature: pae 00:24:23.091 ==> default: -- Memory: 12288M 00:24:23.091 ==> default: -- Memory Backing: hugepages: 00:24:23.091 ==> default: -- Management MAC: 00:24:23.091 ==> default: -- Loader: 00:24:23.091 ==> default: -- Nvram: 00:24:23.091 ==> default: -- Base box: spdk/fedora39 00:24:23.091 ==> default: -- Storage pool: default 00:24:23.091 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733785563_b3164670b529af791b8f.img (20G) 00:24:23.091 ==> default: -- Volume Cache: default 00:24:23.091 ==> default: -- Kernel: 00:24:23.091 ==> default: -- Initrd: 00:24:23.091 ==> default: -- Graphics Type: vnc 00:24:23.091 ==> default: -- Graphics Port: -1 00:24:23.091 ==> default: -- Graphics IP: 127.0.0.1 00:24:23.091 ==> default: -- Graphics Password: Not defined 00:24:23.091 ==> default: -- Video Type: cirrus 00:24:23.091 ==> default: -- Video VRAM: 9216 00:24:23.091 ==> default: -- Sound Type: 00:24:23.091 ==> default: -- Keymap: en-us 00:24:23.091 ==> default: -- TPM Path: 00:24:23.091 ==> default: -- INPUT: type=mouse, bus=ps2 00:24:23.091 ==> default: -- Command line args: 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:24:23.091 ==> default: -> value=-drive, 00:24:23.091 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:24:23.091 ==> default: -> value=-device, 00:24:23.091 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:24:23.091 ==> default: Creating shared folders metadata... 00:24:23.091 ==> default: Starting domain. 00:24:24.023 ==> default: Waiting for domain to get an IP address... 00:24:38.907 ==> default: Waiting for SSH to become available... 00:24:38.907 ==> default: Configuring and enabling network interfaces... 00:24:42.182 default: SSH address: 192.168.121.250:22 00:24:42.182 default: SSH username: vagrant 00:24:42.182 default: SSH auth method: private key 00:24:44.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk 00:24:50.674 ==> default: Mounting SSHFS shared folder... 00:24:51.236 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:24:51.236 ==> default: Checking Mount.. 00:24:52.606 ==> default: Folder Successfully Mounted! 00:24:52.606 00:24:52.606 SUCCESS! 00:24:52.606 00:24:52.606 cd to /var/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt and type "vagrant ssh" to use. 00:24:52.606 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:24:52.606 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt" to destroy all trace of vm. 00:24:52.606 00:24:52.617 [Pipeline] } 00:24:52.633 [Pipeline] // stage 00:24:52.645 [Pipeline] dir 00:24:52.646 Running in /var/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt 00:24:52.648 [Pipeline] { 00:24:52.664 [Pipeline] catchError 00:24:52.665 [Pipeline] { 00:24:52.680 [Pipeline] sh 00:24:52.957 + vagrant ssh-config --host vagrant 00:24:52.957 + sed -ne '/^Host/,$p' 00:24:52.957 + tee ssh_conf 00:24:55.487 Host vagrant 00:24:55.487 HostName 192.168.121.250 00:24:55.487 User vagrant 00:24:55.487 Port 22 00:24:55.487 UserKnownHostsFile /dev/null 00:24:55.487 StrictHostKeyChecking no 00:24:55.487 PasswordAuthentication no 00:24:55.487 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:24:55.487 IdentitiesOnly yes 00:24:55.487 LogLevel FATAL 00:24:55.487 ForwardAgent yes 00:24:55.487 ForwardX11 yes 00:24:55.487 00:24:55.500 [Pipeline] withEnv 00:24:55.502 [Pipeline] { 00:24:55.516 [Pipeline] sh 00:24:55.795 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:24:55.795 source /etc/os-release 00:24:55.795 [[ -e /image.version ]] && img=$(< /image.version) 00:24:55.795 # Minimal, systemd-like check. 
00:24:55.795 if [[ -e /.dockerenv ]]; then 00:24:55.795 # Clear garbage from the node'\''s name: 00:24:55.795 # agt-er_autotest_547-896 -> autotest_547-896 00:24:55.795 # $HOSTNAME is the actual container id 00:24:55.795 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:24:55.795 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:24:55.795 # We can assume this is a mount from a host where container is running, 00:24:55.795 # so fetch its hostname to easily identify the target swarm worker. 00:24:55.795 container="$(< /etc/hostname) ($agent)" 00:24:55.795 else 00:24:55.795 # Fallback 00:24:55.795 container=$agent 00:24:55.795 fi 00:24:55.795 fi 00:24:55.795 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:24:55.795 ' 00:24:55.805 [Pipeline] } 00:24:55.821 [Pipeline] // withEnv 00:24:55.829 [Pipeline] setCustomBuildProperty 00:24:55.844 [Pipeline] stage 00:24:55.846 [Pipeline] { (Tests) 00:24:55.862 [Pipeline] sh 00:24:56.143 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:24:56.156 [Pipeline] sh 00:24:56.435 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:24:56.449 [Pipeline] timeout 00:24:56.449 Timeout set to expire in 50 min 00:24:56.451 [Pipeline] { 00:24:56.466 [Pipeline] sh 00:24:56.746 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:24:57.004 HEAD is now at c12cb8fe3 util: add method for setting fd_group's wrapper 00:24:57.015 [Pipeline] sh 00:24:57.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:24:57.307 [Pipeline] sh 00:24:57.580 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:24:57.593 [Pipeline] sh 00:24:57.867 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:24:57.867 ++ readlink -f spdk_repo 00:24:57.867 + DIR_ROOT=/home/vagrant/spdk_repo 00:24:57.867 + [[ -n /home/vagrant/spdk_repo ]] 00:24:57.867 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:24:57.867 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:24:57.867 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:24:57.867 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:24:57.867 + [[ -d /home/vagrant/spdk_repo/output ]] 00:24:57.867 + [[ nvme-vg-autotest == pkgdep-* ]] 00:24:57.867 + cd /home/vagrant/spdk_repo 00:24:57.867 + source /etc/os-release 00:24:57.867 ++ NAME='Fedora Linux' 00:24:57.867 ++ VERSION='39 (Cloud Edition)' 00:24:57.867 ++ ID=fedora 00:24:57.867 ++ VERSION_ID=39 00:24:57.867 ++ VERSION_CODENAME= 00:24:57.867 ++ PLATFORM_ID=platform:f39 00:24:57.867 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:24:57.867 ++ ANSI_COLOR='0;38;2;60;110;180' 00:24:57.867 ++ LOGO=fedora-logo-icon 00:24:57.867 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:24:57.867 ++ HOME_URL=https://fedoraproject.org/ 00:24:57.867 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:24:57.867 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:24:57.867 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:24:57.867 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:24:57.867 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:24:57.867 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:24:57.867 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:24:57.867 ++ SUPPORT_END=2024-11-12 00:24:57.867 ++ VARIANT='Cloud Edition' 00:24:57.867 ++ VARIANT_ID=cloud 00:24:57.867 + uname -a 00:24:57.867 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:24:57.867 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:24:58.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:58.689 Hugepages 00:24:58.689 node hugesize free / total 00:24:58.689 node0 1048576kB 0 / 0 00:24:58.689 node0 2048kB 0 / 0 00:24:58.689 00:24:58.689 Type BDF Vendor Device NUMA Driver Device Block devices 00:24:58.689 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:24:58.689 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:24:58.689 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:24:58.689 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:24:58.689 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:24:58.689 + rm -f /tmp/spdk-ld-path 00:24:58.689 + source autorun-spdk.conf 00:24:58.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:24:58.689 ++ SPDK_TEST_NVME=1 00:24:58.689 ++ SPDK_TEST_FTL=1 00:24:58.689 ++ SPDK_TEST_ISAL=1 00:24:58.689 ++ SPDK_RUN_ASAN=1 00:24:58.689 ++ SPDK_RUN_UBSAN=1 00:24:58.689 ++ SPDK_TEST_XNVME=1 00:24:58.689 ++ SPDK_TEST_NVME_FDP=1 00:24:58.689 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:58.689 ++ RUN_NIGHTLY=0 00:24:58.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:24:58.689 + [[ -n '' ]] 00:24:58.689 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:24:58.689 + for M in /var/spdk/build-*-manifest.txt 00:24:58.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:24:58.689 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:24:58.689 + for M in /var/spdk/build-*-manifest.txt 00:24:58.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:24:58.689 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:24:58.689 + for M in /var/spdk/build-*-manifest.txt 00:24:58.689 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:24:58.689 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:24:58.689 ++ uname 00:24:58.689 + [[ Linux == \L\i\n\u\x ]] 00:24:58.689 + sudo dmesg -T 00:24:58.689 + sudo dmesg --clear 00:24:58.689 + dmesg_pid=5020 00:24:58.689 
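The setup.sh status table above is the in-guest view of the four controllers defined on the QEMU command line earlier: BDFs 00:10.0 through 00:13.0 match addr=0x10 through 0x13, so nvme0 (serial 12340) is the 6G FTL image with 64-byte metadata, nvme1 (serial 12341) the plain 5G image, nvme2 (serial 12342) exposes the three multi* images as namespaces nvme2n1 through nvme2n3, and nvme3 (serial 12343) sits behind the FDP-enabled subsystem. A minimal sketch, assuming a Linux guest with the standard nvme sysfs attributes, of how that mapping could be confirmed by hand; this is not part of the CI scripts:

    # List each NVMe controller's serial and namespace count via sysfs.
    # 'serial' is a standard attribute of the nvme class device; namespaces
    # appear as nvmeXnY directories under the controller.
    for ctrl in /sys/class/nvme/nvme*; do
        printf '%s serial=%s namespaces=%d\n' \
            "$(basename "$ctrl")" \
            "$(cat "$ctrl/serial")" \
            "$(ls -d "$ctrl"/nvme*n* 2>/dev/null | wc -l)"
    done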
+ [[ Fedora Linux == FreeBSD ]] 00:24:58.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:58.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:58.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:24:58.689 + [[ -x /usr/src/fio-static/fio ]] 00:24:58.689 + sudo dmesg -Tw 00:24:58.689 + export FIO_BIN=/usr/src/fio-static/fio 00:24:58.689 + FIO_BIN=/usr/src/fio-static/fio 00:24:58.689 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:24:58.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:24:58.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:24:58.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:58.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:58.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:24:58.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:58.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:58.689 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:24:58.689 23:06:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:24:58.689 23:06:39 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:58.689 23:06:39 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:24:58.689 23:06:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:24:58.689 23:06:39 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:24:58.689 23:06:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:24:58.689 23:06:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.689 23:06:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:24:58.689 23:06:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:58.689 23:06:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.689 23:06:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.689 23:06:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.689 23:06:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.689 23:06:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.689 23:06:39 -- paths/export.sh@5 -- $ export PATH 00:24:58.689 23:06:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.689 23:06:39 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:58.689 23:06:39 -- common/autobuild_common.sh@493 -- $ date +%s 00:24:58.689 23:06:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733785599.XXXXXX 00:24:58.689 23:06:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733785599.TC2Nyr 00:24:58.690 23:06:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:24:58.690 23:06:39 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:24:58.690 23:06:39 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:58.690 23:06:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:58.690 23:06:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:58.690 23:06:39 -- common/autobuild_common.sh@509 -- $ get_config_params 00:24:58.690 23:06:39 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:24:58.690 23:06:39 -- common/autotest_common.sh@10 -- $ set +x 00:24:58.948 23:06:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:24:58.948 23:06:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:24:58.948 23:06:39 -- pm/common@17 -- $ local monitor 00:24:58.948 23:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:58.948 23:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:58.948 23:06:39 -- pm/common@25 -- $ sleep 1 00:24:58.948 23:06:39 -- pm/common@21 -- $ date +%s 00:24:58.948 23:06:39 -- pm/common@21 -- $ date +%s 00:24:58.948 23:06:39 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785599 00:24:58.948 23:06:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785599 00:24:58.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785599_collect-vmstat.pm.log 00:24:58.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785599_collect-cpu-load.pm.log 00:24:59.881 23:06:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:24:59.881 23:06:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:24:59.881 23:06:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:24:59.881 23:06:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:59.881 23:06:40 -- spdk/autobuild.sh@16 -- $ date -u 00:24:59.881 Mon Dec 9 11:06:40 PM UTC 2024 00:24:59.881 23:06:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:24:59.881 v25.01-pre-316-gc12cb8fe3 00:24:59.881 23:06:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:24:59.881 23:06:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:24:59.881 23:06:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:24:59.881 23:06:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:24:59.881 23:06:40 -- common/autotest_common.sh@10 -- $ set +x 00:24:59.881 ************************************ 00:24:59.881 START TEST asan 00:24:59.881 ************************************ 00:24:59.881 using asan 00:24:59.881 23:06:40 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:24:59.881 00:24:59.881 real 0m0.000s 00:24:59.881 user 0m0.000s 00:24:59.881 sys 0m0.000s 00:24:59.881 23:06:40 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:24:59.881 23:06:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:24:59.881 ************************************ 00:24:59.881 END TEST asan 00:24:59.881 ************************************ 00:24:59.881 23:06:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:24:59.881 23:06:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:24:59.881 23:06:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:24:59.881 23:06:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:24:59.881 23:06:40 -- common/autotest_common.sh@10 -- $ set +x 00:24:59.881 ************************************ 00:24:59.881 START TEST ubsan 00:24:59.881 ************************************ 00:24:59.881 using ubsan 00:24:59.881 23:06:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:24:59.881 00:24:59.881 real 0m0.000s 00:24:59.881 user 0m0.000s 00:24:59.881 sys 0m0.000s 00:24:59.881 23:06:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:24:59.881 23:06:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:24:59.881 ************************************ 00:24:59.881 END TEST ubsan 00:24:59.881 ************************************ 00:24:59.881 23:06:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:24:59.881 23:06:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:24:59.881 23:06:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:24:59.881 23:06:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:24:59.881 23:06:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:24:59.881 23:06:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:24:59.881 23:06:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
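Up to this point autorun-spdk.conf has only been sourced; autobuild.sh now folds those SPDK_RUN_*/SPDK_TEST_* switches into the configure invocation that follows (the config_params string above already shows the result: --enable-asan, --enable-ubsan, --with-xnvme, and so on). A condensed sketch of that mapping, assuming the conf file has been sourced; the real logic lives in SPDK's common test scripts (get_config_params) and covers many more flags than shown here:

    # Hypothetical reduction of get_config_params: translate test-matrix
    # switches into ./configure options. Only flags seen in this run.
    config_params='--enable-debug --enable-werror'
    [[ $SPDK_RUN_ASAN == 1 ]]   && config_params+=' --enable-asan'
    [[ $SPDK_RUN_UBSAN == 1 ]]  && config_params+=' --enable-ubsan'
    [[ $SPDK_TEST_XNVME == 1 ]] && config_params+=' --with-xnvme'
    ./configure $config_params --with-shared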
00:24:59.881 23:06:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:24:59.881 23:06:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:24:59.881 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:59.881 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:25:00.446 Using 'verbs' RDMA provider 00:25:10.991 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:25:20.968 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:25:21.225 Creating mk/config.mk...done. 00:25:21.225 Creating mk/cc.flags.mk...done. 00:25:21.225 Type 'make' to build. 00:25:21.225 23:07:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:25:21.225 23:07:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:25:21.225 23:07:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:25:21.225 23:07:01 -- common/autotest_common.sh@10 -- $ set +x 00:25:21.225 ************************************ 00:25:21.225 START TEST make 00:25:21.225 ************************************ 00:25:21.225 23:07:01 make -- common/autotest_common.sh@1129 -- $ make -j10 00:25:21.482 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:25:21.482 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:25:21.482 meson setup builddir \ 00:25:21.482 -Dwith-libaio=enabled \ 00:25:21.482 -Dwith-liburing=enabled \ 00:25:21.482 -Dwith-libvfn=disabled \ 00:25:21.482 -Dwith-spdk=disabled \ 00:25:21.482 -Dexamples=false \ 00:25:21.482 -Dtests=false \ 00:25:21.482 -Dtools=false && \ 00:25:21.482 meson compile -C builddir && \ 00:25:21.482 cd -) 00:25:21.482 make[1]: Nothing to be done for 'all'. 
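The xnvme submodule is configured as a standalone meson project before SPDK's own build: the libaio and liburing backends are enabled, libvfn and the spdk backend are disabled, and examples, tests, and tools are all off, which is exactly what the "User defined options" summary below reports. A small sketch, assuming meson is on PATH inside the guest, of how the effective options could be inspected after setup; this is not a step the CI runs:

    # 'meson configure' with no -D arguments prints the current settings
    # of an already-configured build directory.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir | grep -E 'with-libaio|with-liburing|with-libvfn|with-spdk|examples|tests|tools'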
00:25:24.008 The Meson build system 00:25:24.008 Version: 1.5.0 00:25:24.008 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:25:24.008 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:25:24.008 Build type: native build 00:25:24.008 Project name: xnvme 00:25:24.008 Project version: 0.7.5 00:25:24.008 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:25:24.008 C linker for the host machine: cc ld.bfd 2.40-14 00:25:24.008 Host machine cpu family: x86_64 00:25:24.008 Host machine cpu: x86_64 00:25:24.008 Message: host_machine.system: linux 00:25:24.008 Compiler for C supports arguments -Wno-missing-braces: YES 00:25:24.008 Compiler for C supports arguments -Wno-cast-function-type: YES 00:25:24.008 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:25:24.008 Run-time dependency threads found: YES 00:25:24.008 Has header "setupapi.h" : NO 00:25:24.008 Has header "linux/blkzoned.h" : YES 00:25:24.008 Has header "linux/blkzoned.h" : YES (cached) 00:25:24.008 Has header "libaio.h" : YES 00:25:24.008 Library aio found: YES 00:25:24.008 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:25:24.008 Run-time dependency liburing found: YES 2.2 00:25:24.008 Dependency libvfn skipped: feature with-libvfn disabled 00:25:24.008 Found CMake: /usr/bin/cmake (3.27.7) 00:25:24.008 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:25:24.008 Subproject spdk : skipped: feature with-spdk disabled 00:25:24.008 Run-time dependency appleframeworks found: NO (tried framework) 00:25:24.008 Run-time dependency appleframeworks found: NO (tried framework) 00:25:24.008 Library rt found: YES 00:25:24.008 Checking for function "clock_gettime" with dependency -lrt: YES 00:25:24.008 Configuring xnvme_config.h using configuration 00:25:24.008 Configuring xnvme.spec using configuration 00:25:24.008 Run-time dependency bash-completion found: YES 2.11 00:25:24.008 Message: Bash-completions: /usr/share/bash-completion/completions 00:25:24.008 Program cp found: YES (/usr/bin/cp) 00:25:24.008 Build targets in project: 3 00:25:24.008 00:25:24.008 xnvme 0.7.5 00:25:24.008 00:25:24.008 Subprojects 00:25:24.008 spdk : NO Feature 'with-spdk' disabled 00:25:24.008 00:25:24.008 User defined options 00:25:24.008 examples : false 00:25:24.008 tests : false 00:25:24.008 tools : false 00:25:24.008 with-libaio : enabled 00:25:24.008 with-liburing: enabled 00:25:24.008 with-libvfn : disabled 00:25:24.008 with-spdk : disabled 00:25:24.008 00:25:24.008 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:25:24.008 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:25:24.008 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:25:24.008 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:25:24.008 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:25:24.008 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:25:24.008 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:25:24.008 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:25:24.008 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:25:24.008 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:25:24.008 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:25:24.008 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:25:24.008 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:25:24.267 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:25:24.267 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:25:24.267 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:25:24.267 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:25:24.267 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:25:24.267 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:25:24.267 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:25:24.267 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:25:24.267 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:25:24.267 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:25:24.267 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:25:24.267 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:25:24.267 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:25:24.267 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:25:24.267 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:25:24.267 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:25:24.267 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:25:24.267 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:25:24.267 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:25:24.267 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:25:24.267 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:25:24.267 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:25:24.267 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:25:24.267 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:25:24.267 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:25:24.267 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:25:24.267 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:25:24.525 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:25:24.525 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:25:24.525 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:25:24.525 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:25:24.525 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:25:24.525 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:25:24.525 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:25:24.525 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:25:24.525 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:25:24.525 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:25:24.525 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:25:24.525 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:25:24.525 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:25:24.525 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:25:24.525 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:25:24.525 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:25:24.525 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:25:24.525 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:25:24.525 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:25:24.525 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:25:24.525 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:25:24.525 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:25:24.525 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:25:24.525 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:25:24.525 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:25:24.525 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:25:24.525 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:25:24.525 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:25:24.782 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:25:24.782 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:25:24.782 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:25:24.782 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:25:24.782 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:25:24.782 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:25:24.782 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:25:25.039 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:25:25.039 [75/76] Linking static target lib/libxnvme.a 00:25:25.039 [76/76] Linking target lib/libxnvme.so.0.7.5 00:25:25.296 INFO: autodetecting backend as ninja 00:25:25.296 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:25:25.296 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:25:31.848 The Meson build system 00:25:31.848 Version: 1.5.0 00:25:31.848 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:25:31.848 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:25:31.848 Build type: native build 00:25:31.848 Program cat found: YES (/usr/bin/cat) 00:25:31.848 Project name: DPDK 00:25:31.848 Project version: 24.03.0 00:25:31.848 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:25:31.848 C linker for the host machine: cc ld.bfd 2.40-14 00:25:31.848 Host machine cpu family: x86_64 00:25:31.848 Host machine cpu: x86_64 00:25:31.848 Message: ## Building in Developer Mode ## 00:25:31.848 Program pkg-config found: YES (/usr/bin/pkg-config) 00:25:31.848 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:25:31.848 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:25:31.848 Program python3 found: YES (/usr/bin/python3) 00:25:31.848 Program cat found: YES (/usr/bin/cat) 00:25:31.848 Compiler for C supports arguments -march=native: YES 00:25:31.848 Checking for size of "void *" : 8 00:25:31.848 Checking for size of "void *" : 8 (cached) 00:25:31.848 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:25:31.848 Library m found: YES 00:25:31.848 Library numa found: YES 00:25:31.848 Has header "numaif.h" : YES 00:25:31.848 Library fdt found: NO 00:25:31.848 Library execinfo found: NO 00:25:31.848 Has header "execinfo.h" : YES 00:25:31.848 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:25:31.848 Run-time dependency libarchive found: NO (tried pkgconfig) 00:25:31.848 Run-time dependency libbsd found: NO (tried pkgconfig) 00:25:31.848 Run-time dependency jansson found: NO (tried pkgconfig) 00:25:31.849 Run-time dependency openssl found: YES 3.1.1 00:25:31.849 Run-time dependency libpcap found: YES 1.10.4 00:25:31.849 Has header "pcap.h" with dependency libpcap: YES 00:25:31.849 Compiler for C supports arguments -Wcast-qual: YES 00:25:31.849 Compiler for C supports arguments -Wdeprecated: YES 00:25:31.849 Compiler for C supports arguments -Wformat: YES 00:25:31.849 Compiler for C supports arguments -Wformat-nonliteral: NO 00:25:31.849 Compiler for C supports arguments -Wformat-security: NO 00:25:31.849 Compiler for C supports arguments -Wmissing-declarations: YES 00:25:31.849 Compiler for C supports arguments -Wmissing-prototypes: YES 00:25:31.849 Compiler for C supports arguments -Wnested-externs: YES 00:25:31.849 Compiler for C supports arguments -Wold-style-definition: YES 00:25:31.849 Compiler for C supports arguments -Wpointer-arith: YES 00:25:31.849 Compiler for C supports arguments -Wsign-compare: YES 00:25:31.849 Compiler for C supports arguments -Wstrict-prototypes: YES 00:25:31.849 Compiler for C supports arguments -Wundef: YES 00:25:31.849 Compiler for C supports arguments -Wwrite-strings: YES 00:25:31.849 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:25:31.849 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:25:31.849 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:25:31.849 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:25:31.849 Program objdump found: YES (/usr/bin/objdump) 00:25:31.849 Compiler for C supports arguments -mavx512f: YES 00:25:31.849 Checking if "AVX512 checking" compiles: YES 00:25:31.849 Fetching value of define "__SSE4_2__" : 1 00:25:31.849 Fetching value of define "__AES__" : 1 00:25:31.849 Fetching value of define "__AVX__" : 1 00:25:31.849 Fetching value of define "__AVX2__" : 1 00:25:31.849 Fetching value of define "__AVX512BW__" : 1 00:25:31.849 Fetching value of define "__AVX512CD__" : 1 00:25:31.849 Fetching value of define "__AVX512DQ__" : 1 00:25:31.849 Fetching value of define "__AVX512F__" : 1 00:25:31.849 Fetching value of define "__AVX512VL__" : 1 00:25:31.849 Fetching value of define "__PCLMUL__" : 1 00:25:31.849 Fetching value of define "__RDRND__" : 1 00:25:31.849 Fetching value of define "__RDSEED__" : 1 00:25:31.849 Fetching value of define "__VPCLMULQDQ__" : 1 00:25:31.849 Fetching value of define "__znver1__" : (undefined) 00:25:31.849 Fetching value of define "__znver2__" : (undefined) 00:25:31.849 Fetching value of define "__znver3__" : (undefined) 00:25:31.849 Fetching value of define "__znver4__" : (undefined) 00:25:31.849 Library asan found: YES 00:25:31.849 Compiler for C supports arguments -Wno-format-truncation: YES 00:25:31.849 Message: lib/log: Defining dependency "log" 00:25:31.849 Message: lib/kvargs: Defining dependency "kvargs" 00:25:31.849 Message: lib/telemetry: Defining dependency "telemetry" 00:25:31.849 Library rt found: YES 00:25:31.849 Checking for function "getentropy" : NO 00:25:31.849 Message: 
lib/eal: Defining dependency "eal" 00:25:31.849 Message: lib/ring: Defining dependency "ring" 00:25:31.849 Message: lib/rcu: Defining dependency "rcu" 00:25:31.849 Message: lib/mempool: Defining dependency "mempool" 00:25:31.849 Message: lib/mbuf: Defining dependency "mbuf" 00:25:31.849 Fetching value of define "__PCLMUL__" : 1 (cached) 00:25:31.849 Fetching value of define "__AVX512F__" : 1 (cached) 00:25:31.849 Fetching value of define "__AVX512BW__" : 1 (cached) 00:25:31.849 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:25:31.849 Fetching value of define "__AVX512VL__" : 1 (cached) 00:25:31.849 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:25:31.849 Compiler for C supports arguments -mpclmul: YES 00:25:31.849 Compiler for C supports arguments -maes: YES 00:25:31.849 Compiler for C supports arguments -mavx512f: YES (cached) 00:25:31.849 Compiler for C supports arguments -mavx512bw: YES 00:25:31.849 Compiler for C supports arguments -mavx512dq: YES 00:25:31.849 Compiler for C supports arguments -mavx512vl: YES 00:25:31.849 Compiler for C supports arguments -mvpclmulqdq: YES 00:25:31.849 Compiler for C supports arguments -mavx2: YES 00:25:31.849 Compiler for C supports arguments -mavx: YES 00:25:31.849 Message: lib/net: Defining dependency "net" 00:25:31.849 Message: lib/meter: Defining dependency "meter" 00:25:31.849 Message: lib/ethdev: Defining dependency "ethdev" 00:25:31.849 Message: lib/pci: Defining dependency "pci" 00:25:31.849 Message: lib/cmdline: Defining dependency "cmdline" 00:25:31.849 Message: lib/hash: Defining dependency "hash" 00:25:31.849 Message: lib/timer: Defining dependency "timer" 00:25:31.849 Message: lib/compressdev: Defining dependency "compressdev" 00:25:31.849 Message: lib/cryptodev: Defining dependency "cryptodev" 00:25:31.849 Message: lib/dmadev: Defining dependency "dmadev" 00:25:31.849 Compiler for C supports arguments -Wno-cast-qual: YES 00:25:31.849 Message: lib/power: Defining dependency "power" 00:25:31.849 Message: lib/reorder: Defining dependency "reorder" 00:25:31.849 Message: lib/security: Defining dependency "security" 00:25:31.849 Has header "linux/userfaultfd.h" : YES 00:25:31.849 Has header "linux/vduse.h" : YES 00:25:31.849 Message: lib/vhost: Defining dependency "vhost" 00:25:31.849 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:25:31.849 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:25:31.849 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:25:31.849 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:25:31.849 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:25:31.849 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:25:31.849 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:25:31.849 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:25:31.849 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:25:31.849 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:25:31.849 Program doxygen found: YES (/usr/local/bin/doxygen) 00:25:31.849 Configuring doxy-api-html.conf using configuration 00:25:31.849 Configuring doxy-api-man.conf using configuration 00:25:31.849 Program mandb found: YES (/usr/bin/mandb) 00:25:31.849 Program sphinx-build found: NO 00:25:31.849 Configuring rte_build_config.h using configuration 00:25:31.849 Message: 00:25:31.849 ================= 00:25:31.849 Applications Enabled 00:25:31.849 
=================
00:25:31.849
00:25:31.849 apps:
00:25:31.849
00:25:31.849
00:25:31.849 Message:
00:25:31.849 =================
00:25:31.849 Libraries Enabled
00:25:31.849 =================
00:25:31.849
00:25:31.849 libs:
00:25:31.849 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:25:31.849 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:25:31.849 cryptodev, dmadev, power, reorder, security, vhost,
00:25:31.849
00:25:31.849 Message:
00:25:31.849 ===============
00:25:31.849 Drivers Enabled
00:25:31.849 ===============
00:25:31.849
00:25:31.849 common:
00:25:31.849
00:25:31.849 bus:
00:25:31.849 pci, vdev,
00:25:31.849 mempool:
00:25:31.849 ring,
00:25:31.849 dma:
00:25:31.849
00:25:31.849 net:
00:25:31.849
00:25:31.849 crypto:
00:25:31.849
00:25:31.849 compress:
00:25:31.849
00:25:31.849 vdpa:
00:25:31.849
00:25:31.849
00:25:31.849 Message:
00:25:31.849 =================
00:25:31.849 Content Skipped
00:25:31.849 =================
00:25:31.849
00:25:31.849 apps:
00:25:31.849 dumpcap: explicitly disabled via build config
00:25:31.849 graph: explicitly disabled via build config
00:25:31.849 pdump: explicitly disabled via build config
00:25:31.849 proc-info: explicitly disabled via build config
00:25:31.849 test-acl: explicitly disabled via build config
00:25:31.849 test-bbdev: explicitly disabled via build config
00:25:31.849 test-cmdline: explicitly disabled via build config
00:25:31.849 test-compress-perf: explicitly disabled via build config
00:25:31.849 test-crypto-perf: explicitly disabled via build config
00:25:31.849 test-dma-perf: explicitly disabled via build config
00:25:31.849 test-eventdev: explicitly disabled via build config
00:25:31.849 test-fib: explicitly disabled via build config
00:25:31.849 test-flow-perf: explicitly disabled via build config
00:25:31.849 test-gpudev: explicitly disabled via build config
00:25:31.849 test-mldev: explicitly disabled via build config
00:25:31.849 test-pipeline: explicitly disabled via build config
00:25:31.849 test-pmd: explicitly disabled via build config
00:25:31.849 test-regex: explicitly disabled via build config
00:25:31.849 test-sad: explicitly disabled via build config
00:25:31.849 test-security-perf: explicitly disabled via build config
00:25:31.849
00:25:31.849 libs:
00:25:31.849 argparse: explicitly disabled via build config
00:25:31.849 metrics: explicitly disabled via build config
00:25:31.849 acl: explicitly disabled via build config
00:25:31.849 bbdev: explicitly disabled via build config
00:25:31.849 bitratestats: explicitly disabled via build config
00:25:31.849 bpf: explicitly disabled via build config
00:25:31.849 cfgfile: explicitly disabled via build config
00:25:31.849 distributor: explicitly disabled via build config
00:25:31.849 efd: explicitly disabled via build config
00:25:31.849 eventdev: explicitly disabled via build config
00:25:31.849 dispatcher: explicitly disabled via build config
00:25:31.849 gpudev: explicitly disabled via build config
00:25:31.849 gro: explicitly disabled via build config
00:25:31.849 gso: explicitly disabled via build config
00:25:31.849 ip_frag: explicitly disabled via build config
00:25:31.849 jobstats: explicitly disabled via build config
00:25:31.849 latencystats: explicitly disabled via build config
00:25:31.849 lpm: explicitly disabled via build config
00:25:31.849 member: explicitly disabled via build config
00:25:31.849 pcapng: explicitly disabled via build config
00:25:31.849 rawdev: explicitly disabled via build config
00:25:31.849 regexdev: explicitly disabled via build config
00:25:31.849 mldev: explicitly disabled via build config
00:25:31.849 rib: explicitly disabled via build config
00:25:31.849 sched: explicitly disabled via build config
00:25:31.849 stack: explicitly disabled via build config
00:25:31.849 ipsec: explicitly disabled via build config
00:25:31.849 pdcp: explicitly disabled via build config
00:25:31.849 fib: explicitly disabled via build config
00:25:31.849 port: explicitly disabled via build config
00:25:31.849 pdump: explicitly disabled via build config
00:25:31.849 table: explicitly disabled via build config
00:25:31.849 pipeline: explicitly disabled via build config
00:25:31.849 graph: explicitly disabled via build config
00:25:31.849 node: explicitly disabled via build config
00:25:31.849
00:25:31.849 drivers:
00:25:31.850 common/cpt: not in enabled drivers build config
00:25:31.850 common/dpaax: not in enabled drivers build config
00:25:31.850 common/iavf: not in enabled drivers build config
00:25:31.850 common/idpf: not in enabled drivers build config
00:25:31.850 common/ionic: not in enabled drivers build config
00:25:31.850 common/mvep: not in enabled drivers build config
00:25:31.850 common/octeontx: not in enabled drivers build config
00:25:31.850 bus/auxiliary: not in enabled drivers build config
00:25:31.850 bus/cdx: not in enabled drivers build config
00:25:31.850 bus/dpaa: not in enabled drivers build config
00:25:31.850 bus/fslmc: not in enabled drivers build config
00:25:31.850 bus/ifpga: not in enabled drivers build config
00:25:31.850 bus/platform: not in enabled drivers build config
00:25:31.850 bus/uacce: not in enabled drivers build config
00:25:31.850 bus/vmbus: not in enabled drivers build config
00:25:31.850 common/cnxk: not in enabled drivers build config
00:25:31.850 common/mlx5: not in enabled drivers build config
00:25:31.850 common/nfp: not in enabled drivers build config
00:25:31.850 common/nitrox: not in enabled drivers build config
00:25:31.850 common/qat: not in enabled drivers build config
00:25:31.850 common/sfc_efx: not in enabled drivers build config
00:25:31.850 mempool/bucket: not in enabled drivers build config
00:25:31.850 mempool/cnxk: not in enabled drivers build config
00:25:31.850 mempool/dpaa: not in enabled drivers build config
00:25:31.850 mempool/dpaa2: not in enabled drivers build config
00:25:31.850 mempool/octeontx: not in enabled drivers build config
00:25:31.850 mempool/stack: not in enabled drivers build config
00:25:31.850 dma/cnxk: not in enabled drivers build config
00:25:31.850 dma/dpaa: not in enabled drivers build config
00:25:31.850 dma/dpaa2: not in enabled drivers build config
00:25:31.850 dma/hisilicon: not in enabled drivers build config
00:25:31.850 dma/idxd: not in enabled drivers build config
00:25:31.850 dma/ioat: not in enabled drivers build config
00:25:31.850 dma/skeleton: not in enabled drivers build config
00:25:31.850 net/af_packet: not in enabled drivers build config
00:25:31.850 net/af_xdp: not in enabled drivers build config
00:25:31.850 net/ark: not in enabled drivers build config
00:25:31.850 net/atlantic: not in enabled drivers build config
00:25:31.850 net/avp: not in enabled drivers build config
00:25:31.850 net/axgbe: not in enabled drivers build config
00:25:31.850 net/bnx2x: not in enabled drivers build config
00:25:31.850 net/bnxt: not in enabled drivers build config
00:25:31.850 net/bonding: not in enabled drivers build config
00:25:31.850 net/cnxk: not in enabled drivers build config
00:25:31.850 net/cpfl: not in enabled drivers build config
00:25:31.850 net/cxgbe: not in enabled drivers build config
00:25:31.850 net/dpaa: not in enabled drivers build config
00:25:31.850 net/dpaa2: not in enabled drivers build config
00:25:31.850 net/e1000: not in enabled drivers build config
00:25:31.850 net/ena: not in enabled drivers build config
00:25:31.850 net/enetc: not in enabled drivers build config
00:25:31.850 net/enetfec: not in enabled drivers build config
00:25:31.850 net/enic: not in enabled drivers build config
00:25:31.850 net/failsafe: not in enabled drivers build config
00:25:31.850 net/fm10k: not in enabled drivers build config
00:25:31.850 net/gve: not in enabled drivers build config
00:25:31.850 net/hinic: not in enabled drivers build config
00:25:31.850 net/hns3: not in enabled drivers build config
00:25:31.850 net/i40e: not in enabled drivers build config
00:25:31.850 net/iavf: not in enabled drivers build config
00:25:31.850 net/ice: not in enabled drivers build config
00:25:31.850 net/idpf: not in enabled drivers build config
00:25:31.850 net/igc: not in enabled drivers build config
00:25:31.850 net/ionic: not in enabled drivers build config
00:25:31.850 net/ipn3ke: not in enabled drivers build config
00:25:31.850 net/ixgbe: not in enabled drivers build config
00:25:31.850 net/mana: not in enabled drivers build config
00:25:31.850 net/memif: not in enabled drivers build config
00:25:31.850 net/mlx4: not in enabled drivers build config
00:25:31.850 net/mlx5: not in enabled drivers build config
00:25:31.850 net/mvneta: not in enabled drivers build config
00:25:31.850 net/mvpp2: not in enabled drivers build config
00:25:31.850 net/netvsc: not in enabled drivers build config
00:25:31.850 net/nfb: not in enabled drivers build config
00:25:31.850 net/nfp: not in enabled drivers build config
00:25:31.850 net/ngbe: not in enabled drivers build config
00:25:31.850 net/null: not in enabled drivers build config
00:25:31.850 net/octeontx: not in enabled drivers build config
00:25:31.850 net/octeon_ep: not in enabled drivers build config
00:25:31.850 net/pcap: not in enabled drivers build config
00:25:31.850 net/pfe: not in enabled drivers build config
00:25:31.850 net/qede: not in enabled drivers build config
00:25:31.850 net/ring: not in enabled drivers build config
00:25:31.850 net/sfc: not in enabled drivers build config
00:25:31.850 net/softnic: not in enabled drivers build config
00:25:31.850 net/tap: not in enabled drivers build config
00:25:31.850 net/thunderx: not in enabled drivers build config
00:25:31.850 net/txgbe: not in enabled drivers build config
00:25:31.850 net/vdev_netvsc: not in enabled drivers build config
00:25:31.850 net/vhost: not in enabled drivers build config
00:25:31.850 net/virtio: not in enabled drivers build config
00:25:31.850 net/vmxnet3: not in enabled drivers build config
00:25:31.850 raw/*: missing internal dependency, "rawdev"
00:25:31.850 crypto/armv8: not in enabled drivers build config
00:25:31.850 crypto/bcmfs: not in enabled drivers build config
00:25:31.850 crypto/caam_jr: not in enabled drivers build config
00:25:31.850 crypto/ccp: not in enabled drivers build config
00:25:31.850 crypto/cnxk: not in enabled drivers build config
00:25:31.850 crypto/dpaa_sec: not in enabled drivers build config
00:25:31.850 crypto/dpaa2_sec: not in enabled drivers build config
00:25:31.850 crypto/ipsec_mb: not in enabled drivers build config
00:25:31.850 crypto/mlx5: not in enabled drivers build config
00:25:31.850 crypto/mvsam: not in enabled drivers build config
00:25:31.850 crypto/nitrox: not in enabled drivers build config
00:25:31.850 crypto/null: not in enabled drivers build config
00:25:31.850 crypto/octeontx: not in enabled drivers build config
00:25:31.850 crypto/openssl: not in enabled drivers build config
00:25:31.850 crypto/scheduler: not in enabled drivers build config
00:25:31.850 crypto/uadk: not in enabled drivers build config
00:25:31.850 crypto/virtio: not in enabled drivers build config
00:25:31.850 compress/isal: not in enabled drivers build config
00:25:31.850 compress/mlx5: not in enabled drivers build config
00:25:31.850 compress/nitrox: not in enabled drivers build config
00:25:31.850 compress/octeontx: not in enabled drivers build config
00:25:31.850 compress/zlib: not in enabled drivers build config
00:25:31.850 regex/*: missing internal dependency, "regexdev"
00:25:31.850 ml/*: missing internal dependency, "mldev"
00:25:31.850 vdpa/ifc: not in enabled drivers build config
00:25:31.850 vdpa/mlx5: not in enabled drivers build config
00:25:31.850 vdpa/nfp: not in enabled drivers build config
00:25:31.850 vdpa/sfc: not in enabled drivers build config
00:25:31.850 event/*: missing internal dependency, "eventdev"
00:25:31.850 baseband/*: missing internal dependency, "bbdev"
00:25:31.850 gpu/*: missing internal dependency, "gpudev"
00:25:31.850
00:25:31.850
00:25:31.850 Build targets in project: 84
00:25:31.850
00:25:31.850 DPDK 24.03.0
00:25:31.850
00:25:31.850 User defined options
00:25:31.850 buildtype : debug
00:25:31.850 default_library : shared
00:25:31.850 libdir : lib
00:25:31.850 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:25:31.850 b_sanitize : address
00:25:31.850 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:25:31.850 c_link_args :
00:25:31.850 cpu_instruction_set: native
00:25:31.850 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:25:31.850 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:25:31.850 enable_docs : false
00:25:31.850 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:25:31.850 enable_kmods : false
00:25:31.850 max_lcores : 128
00:25:31.850 tests : false
00:25:31.850
00:25:31.850 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:25:32.418 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:25:32.418 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:25:32.418 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:25:32.418 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:25:32.418 [4/267] Linking static target lib/librte_kvargs.a
00:25:32.418 [5/267] Linking static target lib/librte_log.a
00:25:32.418 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:25:32.675 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:25:32.675 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:25:32.675 [9/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:25:32.933
[10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:25:32.933 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:25:32.933 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:25:32.933 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:25:32.933 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:25:32.933 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:25:33.191 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:25:33.191 [17/267] Linking static target lib/librte_telemetry.a 00:25:33.191 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:25:33.191 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:25:33.191 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:25:33.191 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:25:33.191 [22/267] Linking target lib/librte_log.so.24.1 00:25:33.191 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:25:33.448 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:25:33.448 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:25:33.448 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:25:33.448 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:25:33.448 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:25:33.448 [29/267] Linking target lib/librte_kvargs.so.24.1 00:25:33.706 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:25:33.706 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:25:33.706 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:25:33.706 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:25:33.706 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:25:33.706 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:25:33.706 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:25:33.706 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:25:33.706 [38/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:25:33.963 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:25:33.963 [40/267] Linking target lib/librte_telemetry.so.24.1 00:25:33.963 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:25:33.963 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:25:33.963 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:25:33.963 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:25:33.963 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:25:34.220 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:25:34.220 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:25:34.220 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:25:34.478 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:25:34.478 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:25:34.478 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:25:34.478 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:25:34.478 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:25:34.478 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:25:34.478 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:25:34.478 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:25:34.478 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:25:34.478 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:25:34.736 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:25:34.736 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:25:34.736 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:25:34.736 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:25:34.736 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:25:34.736 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:25:34.737 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:25:34.737 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:25:34.994 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:25:34.994 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:25:35.253 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:25:35.253 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:25:35.253 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:25:35.253 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:25:35.253 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:25:35.253 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:25:35.253 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:25:35.253 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:25:35.253 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:25:35.253 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:25:35.512 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:25:35.512 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:25:35.512 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:25:35.512 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:25:35.512 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:25:35.769 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:25:35.769 [85/267] Linking static target lib/librte_eal.a 00:25:35.769 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:25:35.769 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:25:35.769 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:25:35.769 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:25:35.769 [90/267] Linking static target lib/librte_ring.a 00:25:35.769 [91/267] Linking static target lib/librte_rcu.a 00:25:35.769 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:25:35.769 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:25:35.769 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:25:35.769 [95/267] Linking static target lib/librte_mempool.a 00:25:36.029 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:25:36.029 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:25:36.030 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:25:36.291 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:25:36.291 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:25:36.291 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:25:36.291 [102/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:25:36.291 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:25:36.551 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:25:36.552 [105/267] Linking static target lib/librte_meter.a 00:25:36.552 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:25:36.552 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:25:36.552 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:25:36.552 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:25:36.552 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:25:36.552 [111/267] Linking static target lib/librte_mbuf.a 00:25:36.552 [112/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:25:36.552 [113/267] Linking static target lib/librte_net.a 00:25:36.812 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:25:36.812 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:25:36.812 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.091 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:25:37.091 [118/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.091 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:25:37.387 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:25:37.387 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:25:37.387 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:25:37.387 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:25:37.387 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.387 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:25:37.646 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:25:37.646 [127/267] Linking static target lib/librte_pci.a 00:25:37.646 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:25:37.646 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:25:37.646 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:25:37.646 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:25:37.646 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:25:37.646 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:25:37.646 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:25:37.646 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:25:37.646 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:25:37.646 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:25:37.906 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:25:37.906 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:25:37.906 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.906 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:25:37.906 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:25:37.906 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:25:37.906 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:25:37.906 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:25:37.906 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:25:37.906 [147/267] Linking static target lib/librte_cmdline.a 00:25:38.164 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:25:38.164 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:25:38.164 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:25:38.423 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:25:38.424 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:25:38.424 [153/267] Linking static target lib/librte_timer.a 00:25:38.424 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:25:38.424 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:25:38.424 [156/267] Linking static target lib/librte_compressdev.a 00:25:38.685 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:25:38.685 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:25:38.685 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:25:38.685 [160/267] Linking static target lib/librte_ethdev.a 00:25:38.685 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:25:38.685 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:25:38.685 [163/267] Linking static target lib/librte_hash.a 00:25:38.685 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:25:38.685 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:25:38.943 [166/267] Linking static target lib/librte_dmadev.a 00:25:38.943 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.943 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:25:38.943 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:25:39.201 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:25:39.201 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:25:39.201 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.201 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.201 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:25:39.512 [175/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.512 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:25:39.512 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:25:39.512 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:25:39.512 [179/267] Linking static target lib/librte_cryptodev.a 00:25:39.512 [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:25:39.512 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:25:39.512 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:25:39.512 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.512 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:25:39.772 [185/267] Linking static target lib/librte_power.a 00:25:39.772 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:25:39.772 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:25:39.772 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:25:40.033 [189/267] Linking static target lib/librte_reorder.a 00:25:40.033 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:25:40.033 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:25:40.033 [192/267] Linking static target lib/librte_security.a 00:25:40.294 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:25:40.294 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.554 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:25:40.554 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.554 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.554 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:25:40.554 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:25:40.814 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:25:40.814 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:25:40.814 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:25:41.078 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:25:41.078 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:25:41.078 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:25:41.078 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:25:41.078 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:25:41.336 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:41.336 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 
00:25:41.336 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:25:41.336 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:25:41.336 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:41.336 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:41.336 [214/267] Linking static target drivers/librte_bus_vdev.a 00:25:41.336 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:25:41.336 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:41.336 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:41.594 [218/267] Linking static target drivers/librte_bus_pci.a 00:25:41.594 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:25:41.594 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:25:41.594 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:41.855 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:25:41.855 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:41.855 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:41.855 [225/267] Linking static target drivers/librte_mempool_ring.a 00:25:41.855 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:42.116 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:43.493 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.493 [229/267] Linking target lib/librte_eal.so.24.1 00:25:43.493 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:25:43.493 [231/267] Linking target lib/librte_pci.so.24.1 00:25:43.493 [232/267] Linking target lib/librte_meter.so.24.1 00:25:43.493 [233/267] Linking target drivers/librte_bus_vdev.so.24.1 00:25:43.493 [234/267] Linking target lib/librte_ring.so.24.1 00:25:43.493 [235/267] Linking target lib/librte_timer.so.24.1 00:25:43.493 [236/267] Linking target lib/librte_dmadev.so.24.1 00:25:43.493 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:25:43.493 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:25:43.493 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:25:43.493 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:25:43.493 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:25:43.493 [242/267] Linking target lib/librte_mempool.so.24.1 00:25:43.493 [243/267] Linking target lib/librte_rcu.so.24.1 00:25:43.493 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:25:43.755 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:25:43.755 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:25:43.755 [247/267] Linking target lib/librte_mbuf.so.24.1 00:25:43.755 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:25:43.755 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
00:25:43.755 [250/267] Linking target lib/librte_net.so.24.1 00:25:43.755 [251/267] Linking target lib/librte_reorder.so.24.1 00:25:43.755 [252/267] Linking target lib/librte_compressdev.so.24.1 00:25:43.755 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:25:44.018 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:25:44.018 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:25:44.018 [256/267] Linking target lib/librte_hash.so.24.1 00:25:44.018 [257/267] Linking target lib/librte_cmdline.so.24.1 00:25:44.018 [258/267] Linking target lib/librte_security.so.24.1 00:25:44.018 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:25:44.281 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:44.281 [261/267] Linking target lib/librte_ethdev.so.24.1 00:25:44.281 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:25:44.281 [263/267] Linking target lib/librte_power.so.24.1 00:25:44.851 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:44.851 [265/267] Linking static target lib/librte_vhost.a 00:25:46.241 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:25:46.241 [267/267] Linking target lib/librte_vhost.so.24.1 00:25:46.241 INFO: autodetecting backend as ninja 00:25:46.241 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:26:01.158 CC lib/log/log_flags.o 00:26:01.158 CC lib/log/log_deprecated.o 00:26:01.158 CC lib/log/log.o 00:26:01.158 CC lib/ut/ut.o 00:26:01.158 CC lib/ut_mock/mock.o 00:26:01.158 LIB libspdk_log.a 00:26:01.158 LIB libspdk_ut.a 00:26:01.158 LIB libspdk_ut_mock.a 00:26:01.158 SO libspdk_ut.so.2.0 00:26:01.158 SO libspdk_log.so.7.1 00:26:01.158 SO libspdk_ut_mock.so.6.0 00:26:01.158 SYMLINK libspdk_ut.so 00:26:01.158 SYMLINK libspdk_ut_mock.so 00:26:01.158 SYMLINK libspdk_log.so 00:26:01.158 CC lib/util/bit_array.o 00:26:01.158 CC lib/util/base64.o 00:26:01.158 CC lib/util/crc16.o 00:26:01.158 CC lib/util/crc32.o 00:26:01.158 CC lib/util/cpuset.o 00:26:01.158 CC lib/util/crc32c.o 00:26:01.158 CC lib/ioat/ioat.o 00:26:01.158 CXX lib/trace_parser/trace.o 00:26:01.158 CC lib/dma/dma.o 00:26:01.158 CC lib/vfio_user/host/vfio_user_pci.o 00:26:01.158 CC lib/util/crc32_ieee.o 00:26:01.158 CC lib/vfio_user/host/vfio_user.o 00:26:01.158 CC lib/util/crc64.o 00:26:01.158 CC lib/util/dif.o 00:26:01.158 CC lib/util/fd.o 00:26:01.158 LIB libspdk_dma.a 00:26:01.158 LIB libspdk_ioat.a 00:26:01.158 CC lib/util/fd_group.o 00:26:01.158 SO libspdk_dma.so.5.0 00:26:01.158 CC lib/util/file.o 00:26:01.158 SO libspdk_ioat.so.7.0 00:26:01.158 CC lib/util/hexlify.o 00:26:01.158 SYMLINK libspdk_dma.so 00:26:01.158 CC lib/util/iov.o 00:26:01.158 SYMLINK libspdk_ioat.so 00:26:01.158 CC lib/util/math.o 00:26:01.158 CC lib/util/net.o 00:26:01.158 CC lib/util/pipe.o 00:26:01.158 LIB libspdk_vfio_user.a 00:26:01.158 SO libspdk_vfio_user.so.5.0 00:26:01.158 CC lib/util/strerror_tls.o 00:26:01.158 CC lib/util/string.o 00:26:01.158 SYMLINK libspdk_vfio_user.so 00:26:01.158 CC lib/util/uuid.o 00:26:01.158 CC lib/util/xor.o 00:26:01.158 CC lib/util/zipf.o 00:26:01.158 CC lib/util/md5.o 00:26:01.158 LIB libspdk_trace_parser.a 00:26:01.158 LIB libspdk_util.a 00:26:01.158 SO libspdk_trace_parser.so.6.0 00:26:01.158 SO libspdk_util.so.10.1 00:26:01.158 
SYMLINK libspdk_trace_parser.so 00:26:01.158 SYMLINK libspdk_util.so 00:26:01.419 CC lib/idxd/idxd.o 00:26:01.419 CC lib/idxd/idxd_kernel.o 00:26:01.419 CC lib/json/json_parse.o 00:26:01.419 CC lib/conf/conf.o 00:26:01.419 CC lib/idxd/idxd_user.o 00:26:01.419 CC lib/json/json_util.o 00:26:01.419 CC lib/json/json_write.o 00:26:01.419 CC lib/env_dpdk/env.o 00:26:01.419 CC lib/rdma_utils/rdma_utils.o 00:26:01.419 CC lib/vmd/vmd.o 00:26:01.419 CC lib/vmd/led.o 00:26:01.419 LIB libspdk_conf.a 00:26:01.419 CC lib/env_dpdk/memory.o 00:26:01.419 SO libspdk_conf.so.6.0 00:26:01.682 CC lib/env_dpdk/pci.o 00:26:01.682 LIB libspdk_rdma_utils.a 00:26:01.682 SYMLINK libspdk_conf.so 00:26:01.682 CC lib/env_dpdk/init.o 00:26:01.682 CC lib/env_dpdk/threads.o 00:26:01.682 SO libspdk_rdma_utils.so.1.0 00:26:01.682 LIB libspdk_json.a 00:26:01.682 CC lib/env_dpdk/pci_ioat.o 00:26:01.682 SO libspdk_json.so.6.0 00:26:01.682 SYMLINK libspdk_rdma_utils.so 00:26:01.682 SYMLINK libspdk_json.so 00:26:01.682 CC lib/env_dpdk/pci_virtio.o 00:26:01.682 CC lib/env_dpdk/pci_vmd.o 00:26:01.682 CC lib/rdma_provider/common.o 00:26:01.682 CC lib/env_dpdk/pci_idxd.o 00:26:01.944 CC lib/jsonrpc/jsonrpc_server.o 00:26:01.944 CC lib/env_dpdk/pci_event.o 00:26:01.944 LIB libspdk_idxd.a 00:26:01.944 CC lib/env_dpdk/sigbus_handler.o 00:26:01.944 CC lib/env_dpdk/pci_dpdk.o 00:26:01.944 SO libspdk_idxd.so.12.1 00:26:01.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:26:01.944 CC lib/rdma_provider/rdma_provider_verbs.o 00:26:01.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:26:01.944 SYMLINK libspdk_idxd.so 00:26:01.944 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:26:01.944 CC lib/jsonrpc/jsonrpc_client.o 00:26:01.944 LIB libspdk_vmd.a 00:26:01.944 SO libspdk_vmd.so.6.0 00:26:01.944 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:26:01.944 SYMLINK libspdk_vmd.so 00:26:02.206 LIB libspdk_rdma_provider.a 00:26:02.206 SO libspdk_rdma_provider.so.7.0 00:26:02.206 SYMLINK libspdk_rdma_provider.so 00:26:02.206 LIB libspdk_jsonrpc.a 00:26:02.206 SO libspdk_jsonrpc.so.6.0 00:26:02.469 SYMLINK libspdk_jsonrpc.so 00:26:02.469 CC lib/rpc/rpc.o 00:26:02.732 LIB libspdk_rpc.a 00:26:02.732 LIB libspdk_env_dpdk.a 00:26:02.732 SO libspdk_rpc.so.6.0 00:26:02.732 SYMLINK libspdk_rpc.so 00:26:02.732 SO libspdk_env_dpdk.so.15.1 00:26:02.994 SYMLINK libspdk_env_dpdk.so 00:26:02.994 CC lib/keyring/keyring.o 00:26:02.994 CC lib/keyring/keyring_rpc.o 00:26:02.994 CC lib/notify/notify.o 00:26:02.994 CC lib/notify/notify_rpc.o 00:26:02.994 CC lib/trace/trace.o 00:26:02.994 CC lib/trace/trace_rpc.o 00:26:02.994 CC lib/trace/trace_flags.o 00:26:03.254 LIB libspdk_notify.a 00:26:03.255 LIB libspdk_keyring.a 00:26:03.255 SO libspdk_notify.so.6.0 00:26:03.255 SO libspdk_keyring.so.2.0 00:26:03.255 SYMLINK libspdk_notify.so 00:26:03.255 LIB libspdk_trace.a 00:26:03.255 SYMLINK libspdk_keyring.so 00:26:03.255 SO libspdk_trace.so.11.0 00:26:03.255 SYMLINK libspdk_trace.so 00:26:03.515 CC lib/sock/sock_rpc.o 00:26:03.515 CC lib/sock/sock.o 00:26:03.515 CC lib/thread/thread.o 00:26:03.515 CC lib/thread/iobuf.o 00:26:04.088 LIB libspdk_sock.a 00:26:04.088 SO libspdk_sock.so.10.0 00:26:04.088 SYMLINK libspdk_sock.so 00:26:04.347 CC lib/nvme/nvme_ctrlr_cmd.o 00:26:04.347 CC lib/nvme/nvme_ns.o 00:26:04.347 CC lib/nvme/nvme_fabric.o 00:26:04.347 CC lib/nvme/nvme_ctrlr.o 00:26:04.347 CC lib/nvme/nvme_ns_cmd.o 00:26:04.347 CC lib/nvme/nvme_pcie_common.o 00:26:04.347 CC lib/nvme/nvme_qpair.o 00:26:04.347 CC lib/nvme/nvme_pcie.o 00:26:04.347 CC lib/nvme/nvme.o 00:26:04.913 CC lib/nvme/nvme_quirks.o 00:26:04.913 
CC lib/nvme/nvme_transport.o 00:26:04.913 CC lib/nvme/nvme_discovery.o 00:26:04.913 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:26:05.172 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:05.172 LIB libspdk_thread.a 00:26:05.172 CC lib/nvme/nvme_tcp.o 00:26:05.172 SO libspdk_thread.so.11.0 00:26:05.172 CC lib/nvme/nvme_opal.o 00:26:05.172 SYMLINK libspdk_thread.so 00:26:05.172 CC lib/nvme/nvme_io_msg.o 00:26:05.172 CC lib/nvme/nvme_poll_group.o 00:26:05.433 CC lib/nvme/nvme_zns.o 00:26:05.433 CC lib/accel/accel.o 00:26:05.433 CC lib/blob/blobstore.o 00:26:05.433 CC lib/blob/request.o 00:26:05.693 CC lib/blob/zeroes.o 00:26:05.693 CC lib/blob/blob_bs_dev.o 00:26:05.693 CC lib/nvme/nvme_stubs.o 00:26:05.693 CC lib/nvme/nvme_auth.o 00:26:05.693 CC lib/accel/accel_rpc.o 00:26:05.955 CC lib/accel/accel_sw.o 00:26:05.955 CC lib/init/json_config.o 00:26:05.955 CC lib/virtio/virtio.o 00:26:06.216 CC lib/fsdev/fsdev.o 00:26:06.216 CC lib/virtio/virtio_vhost_user.o 00:26:06.216 CC lib/virtio/virtio_vfio_user.o 00:26:06.216 CC lib/init/subsystem.o 00:26:06.216 CC lib/init/subsystem_rpc.o 00:26:06.477 CC lib/init/rpc.o 00:26:06.477 CC lib/nvme/nvme_cuse.o 00:26:06.477 CC lib/virtio/virtio_pci.o 00:26:06.477 CC lib/nvme/nvme_rdma.o 00:26:06.477 CC lib/fsdev/fsdev_io.o 00:26:06.477 LIB libspdk_init.a 00:26:06.477 SO libspdk_init.so.6.0 00:26:06.477 CC lib/fsdev/fsdev_rpc.o 00:26:06.739 SYMLINK libspdk_init.so 00:26:06.739 LIB libspdk_accel.a 00:26:06.739 SO libspdk_accel.so.16.0 00:26:06.739 LIB libspdk_virtio.a 00:26:06.739 CC lib/event/app.o 00:26:06.739 CC lib/event/reactor.o 00:26:06.739 CC lib/event/app_rpc.o 00:26:06.739 CC lib/event/log_rpc.o 00:26:06.739 SYMLINK libspdk_accel.so 00:26:06.739 CC lib/event/scheduler_static.o 00:26:06.739 SO libspdk_virtio.so.7.0 00:26:06.739 SYMLINK libspdk_virtio.so 00:26:06.739 LIB libspdk_fsdev.a 00:26:06.999 SO libspdk_fsdev.so.2.0 00:26:06.999 SYMLINK libspdk_fsdev.so 00:26:06.999 CC lib/bdev/bdev.o 00:26:06.999 CC lib/bdev/bdev_rpc.o 00:26:06.999 CC lib/bdev/bdev_zone.o 00:26:06.999 CC lib/bdev/part.o 00:26:06.999 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:26:07.260 CC lib/bdev/scsi_nvme.o 00:26:07.260 LIB libspdk_event.a 00:26:07.260 SO libspdk_event.so.14.0 00:26:07.260 SYMLINK libspdk_event.so 00:26:07.832 LIB libspdk_fuse_dispatcher.a 00:26:07.832 SO libspdk_fuse_dispatcher.so.1.0 00:26:07.832 SYMLINK libspdk_fuse_dispatcher.so 00:26:07.832 LIB libspdk_nvme.a 00:26:08.092 SO libspdk_nvme.so.15.0 00:26:08.352 SYMLINK libspdk_nvme.so 00:26:08.926 LIB libspdk_blob.a 00:26:08.926 SO libspdk_blob.so.12.0 00:26:08.926 SYMLINK libspdk_blob.so 00:26:09.187 CC lib/blobfs/tree.o 00:26:09.187 CC lib/blobfs/blobfs.o 00:26:09.187 CC lib/lvol/lvol.o 00:26:09.758 LIB libspdk_bdev.a 00:26:09.758 SO libspdk_bdev.so.17.0 00:26:10.019 SYMLINK libspdk_bdev.so 00:26:10.019 CC lib/nbd/nbd.o 00:26:10.019 CC lib/nbd/nbd_rpc.o 00:26:10.019 CC lib/scsi/lun.o 00:26:10.019 CC lib/scsi/port.o 00:26:10.019 CC lib/scsi/dev.o 00:26:10.019 CC lib/nvmf/ctrlr.o 00:26:10.019 CC lib/ublk/ublk.o 00:26:10.019 CC lib/ftl/ftl_core.o 00:26:10.019 LIB libspdk_blobfs.a 00:26:10.019 SO libspdk_blobfs.so.11.0 00:26:10.280 SYMLINK libspdk_blobfs.so 00:26:10.280 CC lib/ftl/ftl_init.o 00:26:10.280 LIB libspdk_lvol.a 00:26:10.280 CC lib/scsi/scsi.o 00:26:10.280 CC lib/ftl/ftl_layout.o 00:26:10.280 SO libspdk_lvol.so.11.0 00:26:10.280 SYMLINK libspdk_lvol.so 00:26:10.280 CC lib/ftl/ftl_debug.o 00:26:10.280 CC lib/ftl/ftl_io.o 00:26:10.280 CC lib/scsi/scsi_bdev.o 00:26:10.280 CC lib/scsi/scsi_pr.o 00:26:10.280 CC 
lib/scsi/scsi_rpc.o 00:26:10.280 CC lib/scsi/task.o 00:26:10.540 LIB libspdk_nbd.a 00:26:10.540 SO libspdk_nbd.so.7.0 00:26:10.540 CC lib/ublk/ublk_rpc.o 00:26:10.540 CC lib/ftl/ftl_sb.o 00:26:10.540 CC lib/ftl/ftl_l2p.o 00:26:10.540 SYMLINK libspdk_nbd.so 00:26:10.540 CC lib/ftl/ftl_l2p_flat.o 00:26:10.540 CC lib/ftl/ftl_nv_cache.o 00:26:10.540 CC lib/ftl/ftl_band.o 00:26:10.540 CC lib/ftl/ftl_band_ops.o 00:26:10.801 CC lib/ftl/ftl_writer.o 00:26:10.801 CC lib/ftl/ftl_rq.o 00:26:10.801 CC lib/ftl/ftl_reloc.o 00:26:10.801 CC lib/ftl/ftl_l2p_cache.o 00:26:10.801 LIB libspdk_ublk.a 00:26:10.801 SO libspdk_ublk.so.3.0 00:26:10.801 LIB libspdk_scsi.a 00:26:10.801 SYMLINK libspdk_ublk.so 00:26:10.801 CC lib/nvmf/ctrlr_discovery.o 00:26:10.801 SO libspdk_scsi.so.9.0 00:26:10.801 CC lib/ftl/ftl_p2l.o 00:26:10.801 CC lib/nvmf/ctrlr_bdev.o 00:26:10.801 CC lib/ftl/ftl_p2l_log.o 00:26:11.062 SYMLINK libspdk_scsi.so 00:26:11.062 CC lib/ftl/mngt/ftl_mngt.o 00:26:11.062 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:11.062 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:11.323 CC lib/iscsi/conn.o 00:26:11.323 CC lib/iscsi/init_grp.o 00:26:11.323 CC lib/iscsi/iscsi.o 00:26:11.323 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:11.323 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:11.323 CC lib/nvmf/subsystem.o 00:26:11.323 CC lib/vhost/vhost.o 00:26:11.323 CC lib/vhost/vhost_rpc.o 00:26:11.584 CC lib/vhost/vhost_scsi.o 00:26:11.584 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:11.584 CC lib/iscsi/param.o 00:26:11.584 CC lib/iscsi/portal_grp.o 00:26:11.584 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:26:11.846 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:11.846 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:11.846 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:11.846 CC lib/iscsi/tgt_node.o 00:26:11.846 CC lib/vhost/vhost_blk.o 00:26:11.846 CC lib/vhost/rte_vhost_user.o 00:26:11.846 CC lib/iscsi/iscsi_subsystem.o 00:26:12.108 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:26:12.108 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:12.108 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:12.383 CC lib/ftl/utils/ftl_conf.o 00:26:12.383 CC lib/nvmf/nvmf.o 00:26:12.383 CC lib/nvmf/nvmf_rpc.o 00:26:12.383 CC lib/nvmf/transport.o 00:26:12.383 CC lib/iscsi/iscsi_rpc.o 00:26:12.383 CC lib/ftl/utils/ftl_md.o 00:26:12.645 CC lib/ftl/utils/ftl_mempool.o 00:26:12.645 CC lib/ftl/utils/ftl_bitmap.o 00:26:12.645 CC lib/ftl/utils/ftl_property.o 00:26:12.645 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:12.645 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:12.907 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:12.907 CC lib/iscsi/task.o 00:26:12.907 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:12.907 LIB libspdk_vhost.a 00:26:12.907 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:12.907 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:12.907 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:12.907 SO libspdk_vhost.so.8.0 00:26:12.907 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:12.907 LIB libspdk_iscsi.a 00:26:12.907 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:13.168 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:13.168 SO libspdk_iscsi.so.8.0 00:26:13.168 SYMLINK libspdk_vhost.so 00:26:13.168 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:13.168 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:26:13.168 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:26:13.168 CC lib/nvmf/tcp.o 00:26:13.168 CC lib/nvmf/stubs.o 00:26:13.168 CC lib/nvmf/mdns_server.o 00:26:13.168 CC lib/ftl/base/ftl_base_dev.o 00:26:13.168 CC lib/nvmf/rdma.o 00:26:13.168 SYMLINK libspdk_iscsi.so 00:26:13.168 CC lib/ftl/base/ftl_base_bdev.o 00:26:13.168 CC lib/ftl/ftl_trace.o 00:26:13.168 CC lib/nvmf/auth.o 00:26:13.428 LIB 
libspdk_ftl.a 00:26:13.691 SO libspdk_ftl.so.9.0 00:26:13.954 SYMLINK libspdk_ftl.so 00:26:15.341 LIB libspdk_nvmf.a 00:26:15.603 SO libspdk_nvmf.so.20.0 00:26:15.603 SYMLINK libspdk_nvmf.so 00:26:15.868 CC module/env_dpdk/env_dpdk_rpc.o 00:26:16.128 CC module/keyring/file/keyring.o 00:26:16.128 CC module/blob/bdev/blob_bdev.o 00:26:16.128 CC module/sock/posix/posix.o 00:26:16.128 CC module/accel/error/accel_error.o 00:26:16.128 CC module/scheduler/dynamic/scheduler_dynamic.o 00:26:16.128 CC module/keyring/linux/keyring.o 00:26:16.128 CC module/fsdev/aio/fsdev_aio.o 00:26:16.128 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:26:16.128 CC module/accel/ioat/accel_ioat.o 00:26:16.128 LIB libspdk_env_dpdk_rpc.a 00:26:16.128 SO libspdk_env_dpdk_rpc.so.6.0 00:26:16.128 CC module/keyring/file/keyring_rpc.o 00:26:16.128 SYMLINK libspdk_env_dpdk_rpc.so 00:26:16.128 CC module/accel/ioat/accel_ioat_rpc.o 00:26:16.128 CC module/keyring/linux/keyring_rpc.o 00:26:16.128 LIB libspdk_scheduler_dpdk_governor.a 00:26:16.128 SO libspdk_scheduler_dpdk_governor.so.4.0 00:26:16.128 CC module/accel/error/accel_error_rpc.o 00:26:16.128 LIB libspdk_scheduler_dynamic.a 00:26:16.388 LIB libspdk_keyring_file.a 00:26:16.388 SO libspdk_scheduler_dynamic.so.4.0 00:26:16.388 LIB libspdk_accel_ioat.a 00:26:16.388 LIB libspdk_keyring_linux.a 00:26:16.388 LIB libspdk_blob_bdev.a 00:26:16.388 SYMLINK libspdk_scheduler_dpdk_governor.so 00:26:16.388 SO libspdk_keyring_file.so.2.0 00:26:16.388 SO libspdk_accel_ioat.so.6.0 00:26:16.388 SO libspdk_keyring_linux.so.1.0 00:26:16.388 SYMLINK libspdk_scheduler_dynamic.so 00:26:16.388 SO libspdk_blob_bdev.so.12.0 00:26:16.388 SYMLINK libspdk_accel_ioat.so 00:26:16.388 SYMLINK libspdk_keyring_file.so 00:26:16.388 CC module/fsdev/aio/fsdev_aio_rpc.o 00:26:16.388 LIB libspdk_accel_error.a 00:26:16.388 CC module/fsdev/aio/linux_aio_mgr.o 00:26:16.388 SYMLINK libspdk_keyring_linux.so 00:26:16.388 SO libspdk_accel_error.so.2.0 00:26:16.388 SYMLINK libspdk_blob_bdev.so 00:26:16.388 CC module/scheduler/gscheduler/gscheduler.o 00:26:16.388 SYMLINK libspdk_accel_error.so 00:26:16.388 CC module/accel/dsa/accel_dsa.o 00:26:16.388 CC module/accel/iaa/accel_iaa.o 00:26:16.388 CC module/accel/dsa/accel_dsa_rpc.o 00:26:16.648 LIB libspdk_scheduler_gscheduler.a 00:26:16.648 SO libspdk_scheduler_gscheduler.so.4.0 00:26:16.648 CC module/blobfs/bdev/blobfs_bdev.o 00:26:16.648 CC module/bdev/delay/vbdev_delay.o 00:26:16.648 CC module/bdev/delay/vbdev_delay_rpc.o 00:26:16.648 CC module/bdev/error/vbdev_error.o 00:26:16.648 SYMLINK libspdk_scheduler_gscheduler.so 00:26:16.648 CC module/bdev/error/vbdev_error_rpc.o 00:26:16.648 CC module/bdev/gpt/gpt.o 00:26:16.648 CC module/accel/iaa/accel_iaa_rpc.o 00:26:16.648 LIB libspdk_fsdev_aio.a 00:26:16.648 CC module/bdev/gpt/vbdev_gpt.o 00:26:16.648 LIB libspdk_accel_dsa.a 00:26:16.908 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:16.908 SO libspdk_fsdev_aio.so.1.0 00:26:16.908 SO libspdk_accel_dsa.so.5.0 00:26:16.908 LIB libspdk_sock_posix.a 00:26:16.908 LIB libspdk_accel_iaa.a 00:26:16.908 SO libspdk_sock_posix.so.6.0 00:26:16.908 SO libspdk_accel_iaa.so.3.0 00:26:16.908 SYMLINK libspdk_fsdev_aio.so 00:26:16.908 SYMLINK libspdk_accel_dsa.so 00:26:16.908 LIB libspdk_bdev_error.a 00:26:16.908 SYMLINK libspdk_accel_iaa.so 00:26:16.908 LIB libspdk_bdev_delay.a 00:26:16.908 SYMLINK libspdk_sock_posix.so 00:26:16.908 SO libspdk_bdev_error.so.6.0 00:26:16.908 CC module/bdev/lvol/vbdev_lvol.o 00:26:16.908 SO libspdk_bdev_delay.so.6.0 00:26:16.908 LIB 
libspdk_blobfs_bdev.a 00:26:16.908 CC module/bdev/malloc/bdev_malloc.o 00:26:16.908 SYMLINK libspdk_bdev_error.so 00:26:16.908 SO libspdk_blobfs_bdev.so.6.0 00:26:16.908 CC module/bdev/null/bdev_null.o 00:26:16.908 CC module/bdev/nvme/bdev_nvme.o 00:26:16.908 SYMLINK libspdk_bdev_delay.so 00:26:16.908 LIB libspdk_bdev_gpt.a 00:26:16.908 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:17.169 CC module/bdev/passthru/vbdev_passthru.o 00:26:17.170 SO libspdk_bdev_gpt.so.6.0 00:26:17.170 SYMLINK libspdk_blobfs_bdev.so 00:26:17.170 CC module/bdev/raid/bdev_raid.o 00:26:17.170 CC module/bdev/raid/bdev_raid_rpc.o 00:26:17.170 SYMLINK libspdk_bdev_gpt.so 00:26:17.170 CC module/bdev/raid/bdev_raid_sb.o 00:26:17.170 CC module/bdev/split/vbdev_split.o 00:26:17.170 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:17.170 CC module/bdev/null/bdev_null_rpc.o 00:26:17.170 CC module/bdev/nvme/nvme_rpc.o 00:26:17.431 CC module/bdev/split/vbdev_split_rpc.o 00:26:17.431 LIB libspdk_bdev_malloc.a 00:26:17.431 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:17.431 SO libspdk_bdev_malloc.so.6.0 00:26:17.431 LIB libspdk_bdev_null.a 00:26:17.431 SYMLINK libspdk_bdev_malloc.so 00:26:17.431 SO libspdk_bdev_null.so.6.0 00:26:17.431 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:17.431 LIB libspdk_bdev_split.a 00:26:17.431 LIB libspdk_bdev_passthru.a 00:26:17.432 SO libspdk_bdev_split.so.6.0 00:26:17.432 SYMLINK libspdk_bdev_null.so 00:26:17.432 SO libspdk_bdev_passthru.so.6.0 00:26:17.432 CC module/bdev/nvme/bdev_mdns_client.o 00:26:17.432 SYMLINK libspdk_bdev_split.so 00:26:17.432 CC module/bdev/nvme/vbdev_opal.o 00:26:17.432 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:17.692 SYMLINK libspdk_bdev_passthru.so 00:26:17.692 CC module/bdev/xnvme/bdev_xnvme.o 00:26:17.692 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:26:17.692 CC module/bdev/aio/bdev_aio.o 00:26:17.692 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:17.692 CC module/bdev/aio/bdev_aio_rpc.o 00:26:17.692 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:17.692 LIB libspdk_bdev_lvol.a 00:26:17.692 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:17.953 SO libspdk_bdev_lvol.so.6.0 00:26:17.953 LIB libspdk_bdev_xnvme.a 00:26:17.953 SO libspdk_bdev_xnvme.so.3.0 00:26:17.953 CC module/bdev/raid/raid0.o 00:26:17.953 CC module/bdev/raid/raid1.o 00:26:17.953 SYMLINK libspdk_bdev_lvol.so 00:26:17.953 SYMLINK libspdk_bdev_xnvme.so 00:26:17.953 LIB libspdk_bdev_zone_block.a 00:26:17.953 SO libspdk_bdev_zone_block.so.6.0 00:26:17.953 LIB libspdk_bdev_aio.a 00:26:17.953 CC module/bdev/raid/concat.o 00:26:17.953 SO libspdk_bdev_aio.so.6.0 00:26:17.953 SYMLINK libspdk_bdev_zone_block.so 00:26:17.953 CC module/bdev/ftl/bdev_ftl.o 00:26:17.953 CC module/bdev/iscsi/bdev_iscsi.o 00:26:17.953 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:17.953 SYMLINK libspdk_bdev_aio.so 00:26:17.953 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:26:17.953 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:18.215 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:18.215 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:18.215 LIB libspdk_bdev_raid.a 00:26:18.215 SO libspdk_bdev_raid.so.6.0 00:26:18.215 LIB libspdk_bdev_ftl.a 00:26:18.215 SYMLINK libspdk_bdev_raid.so 00:26:18.474 SO libspdk_bdev_ftl.so.6.0 00:26:18.474 LIB libspdk_bdev_iscsi.a 00:26:18.474 SYMLINK libspdk_bdev_ftl.so 00:26:18.474 SO libspdk_bdev_iscsi.so.6.0 00:26:18.474 SYMLINK libspdk_bdev_iscsi.so 00:26:18.734 LIB libspdk_bdev_virtio.a 00:26:18.734 SO libspdk_bdev_virtio.so.6.0 00:26:18.734 SYMLINK libspdk_bdev_virtio.so 00:26:19.758 LIB libspdk_bdev_nvme.a 
00:26:19.758 SO libspdk_bdev_nvme.so.7.1 00:26:19.758 SYMLINK libspdk_bdev_nvme.so 00:26:20.327 CC module/event/subsystems/iobuf/iobuf.o 00:26:20.327 CC module/event/subsystems/scheduler/scheduler.o 00:26:20.327 CC module/event/subsystems/vmd/vmd.o 00:26:20.327 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:20.327 CC module/event/subsystems/keyring/keyring.o 00:26:20.327 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:20.327 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:20.327 CC module/event/subsystems/fsdev/fsdev.o 00:26:20.327 CC module/event/subsystems/sock/sock.o 00:26:20.327 LIB libspdk_event_keyring.a 00:26:20.327 LIB libspdk_event_vmd.a 00:26:20.327 LIB libspdk_event_vhost_blk.a 00:26:20.327 LIB libspdk_event_scheduler.a 00:26:20.327 LIB libspdk_event_sock.a 00:26:20.327 SO libspdk_event_keyring.so.1.0 00:26:20.327 SO libspdk_event_vhost_blk.so.3.0 00:26:20.327 LIB libspdk_event_iobuf.a 00:26:20.327 SO libspdk_event_vmd.so.6.0 00:26:20.327 LIB libspdk_event_fsdev.a 00:26:20.327 SO libspdk_event_scheduler.so.4.0 00:26:20.327 SO libspdk_event_sock.so.5.0 00:26:20.327 SO libspdk_event_fsdev.so.1.0 00:26:20.327 SO libspdk_event_iobuf.so.3.0 00:26:20.327 SYMLINK libspdk_event_vhost_blk.so 00:26:20.327 SYMLINK libspdk_event_keyring.so 00:26:20.327 SYMLINK libspdk_event_scheduler.so 00:26:20.327 SYMLINK libspdk_event_vmd.so 00:26:20.327 SYMLINK libspdk_event_sock.so 00:26:20.588 SYMLINK libspdk_event_fsdev.so 00:26:20.588 SYMLINK libspdk_event_iobuf.so 00:26:20.588 CC module/event/subsystems/accel/accel.o 00:26:20.854 LIB libspdk_event_accel.a 00:26:20.854 SO libspdk_event_accel.so.6.0 00:26:20.854 SYMLINK libspdk_event_accel.so 00:26:21.113 CC module/event/subsystems/bdev/bdev.o 00:26:21.373 LIB libspdk_event_bdev.a 00:26:21.373 SO libspdk_event_bdev.so.6.0 00:26:21.373 SYMLINK libspdk_event_bdev.so 00:26:21.633 CC module/event/subsystems/nbd/nbd.o 00:26:21.633 CC module/event/subsystems/ublk/ublk.o 00:26:21.633 CC module/event/subsystems/scsi/scsi.o 00:26:21.633 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:21.633 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:21.633 LIB libspdk_event_nbd.a 00:26:21.633 LIB libspdk_event_ublk.a 00:26:21.633 LIB libspdk_event_scsi.a 00:26:21.633 SO libspdk_event_nbd.so.6.0 00:26:21.633 SO libspdk_event_ublk.so.3.0 00:26:21.634 SO libspdk_event_scsi.so.6.0 00:26:21.634 SYMLINK libspdk_event_nbd.so 00:26:21.634 SYMLINK libspdk_event_ublk.so 00:26:21.634 SYMLINK libspdk_event_scsi.so 00:26:21.894 LIB libspdk_event_nvmf.a 00:26:21.894 SO libspdk_event_nvmf.so.6.0 00:26:21.894 SYMLINK libspdk_event_nvmf.so 00:26:21.894 CC module/event/subsystems/iscsi/iscsi.o 00:26:21.895 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:22.156 LIB libspdk_event_iscsi.a 00:26:22.156 LIB libspdk_event_vhost_scsi.a 00:26:22.156 SO libspdk_event_iscsi.so.6.0 00:26:22.156 SO libspdk_event_vhost_scsi.so.3.0 00:26:22.156 SYMLINK libspdk_event_iscsi.so 00:26:22.156 SYMLINK libspdk_event_vhost_scsi.so 00:26:22.156 SO libspdk.so.6.0 00:26:22.416 SYMLINK libspdk.so 00:26:22.416 CXX app/trace/trace.o 00:26:22.416 TEST_HEADER include/spdk/accel.h 00:26:22.416 TEST_HEADER include/spdk/accel_module.h 00:26:22.416 TEST_HEADER include/spdk/assert.h 00:26:22.416 CC test/rpc_client/rpc_client_test.o 00:26:22.416 TEST_HEADER include/spdk/barrier.h 00:26:22.416 TEST_HEADER include/spdk/base64.h 00:26:22.416 TEST_HEADER include/spdk/bdev.h 00:26:22.416 TEST_HEADER include/spdk/bdev_module.h 00:26:22.416 TEST_HEADER include/spdk/bdev_zone.h 00:26:22.416 TEST_HEADER 
include/spdk/bit_array.h 00:26:22.416 TEST_HEADER include/spdk/bit_pool.h 00:26:22.416 TEST_HEADER include/spdk/blob_bdev.h 00:26:22.416 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:22.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:26:22.416 TEST_HEADER include/spdk/blobfs.h 00:26:22.416 TEST_HEADER include/spdk/blob.h 00:26:22.416 TEST_HEADER include/spdk/conf.h 00:26:22.416 TEST_HEADER include/spdk/config.h 00:26:22.416 TEST_HEADER include/spdk/cpuset.h 00:26:22.416 TEST_HEADER include/spdk/crc16.h 00:26:22.416 TEST_HEADER include/spdk/crc32.h 00:26:22.416 TEST_HEADER include/spdk/crc64.h 00:26:22.416 TEST_HEADER include/spdk/dif.h 00:26:22.416 TEST_HEADER include/spdk/dma.h 00:26:22.416 TEST_HEADER include/spdk/endian.h 00:26:22.416 TEST_HEADER include/spdk/env_dpdk.h 00:26:22.416 TEST_HEADER include/spdk/env.h 00:26:22.416 TEST_HEADER include/spdk/event.h 00:26:22.416 TEST_HEADER include/spdk/fd_group.h 00:26:22.416 TEST_HEADER include/spdk/fd.h 00:26:22.416 TEST_HEADER include/spdk/file.h 00:26:22.416 TEST_HEADER include/spdk/fsdev.h 00:26:22.416 TEST_HEADER include/spdk/fsdev_module.h 00:26:22.416 TEST_HEADER include/spdk/ftl.h 00:26:22.416 TEST_HEADER include/spdk/gpt_spec.h 00:26:22.416 CC examples/util/zipf/zipf.o 00:26:22.416 TEST_HEADER include/spdk/hexlify.h 00:26:22.416 CC examples/ioat/perf/perf.o 00:26:22.416 TEST_HEADER include/spdk/histogram_data.h 00:26:22.416 TEST_HEADER include/spdk/idxd.h 00:26:22.416 TEST_HEADER include/spdk/idxd_spec.h 00:26:22.416 TEST_HEADER include/spdk/init.h 00:26:22.416 TEST_HEADER include/spdk/ioat.h 00:26:22.416 TEST_HEADER include/spdk/ioat_spec.h 00:26:22.416 CC test/thread/poller_perf/poller_perf.o 00:26:22.416 TEST_HEADER include/spdk/iscsi_spec.h 00:26:22.416 TEST_HEADER include/spdk/json.h 00:26:22.677 TEST_HEADER include/spdk/jsonrpc.h 00:26:22.677 TEST_HEADER include/spdk/keyring.h 00:26:22.677 TEST_HEADER include/spdk/keyring_module.h 00:26:22.677 TEST_HEADER include/spdk/likely.h 00:26:22.677 TEST_HEADER include/spdk/log.h 00:26:22.677 TEST_HEADER include/spdk/lvol.h 00:26:22.677 TEST_HEADER include/spdk/md5.h 00:26:22.677 TEST_HEADER include/spdk/memory.h 00:26:22.677 TEST_HEADER include/spdk/mmio.h 00:26:22.677 CC test/dma/test_dma/test_dma.o 00:26:22.677 TEST_HEADER include/spdk/nbd.h 00:26:22.677 TEST_HEADER include/spdk/net.h 00:26:22.677 TEST_HEADER include/spdk/notify.h 00:26:22.677 TEST_HEADER include/spdk/nvme.h 00:26:22.677 TEST_HEADER include/spdk/nvme_intel.h 00:26:22.677 TEST_HEADER include/spdk/nvme_ocssd.h 00:26:22.677 CC test/env/mem_callbacks/mem_callbacks.o 00:26:22.677 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:26:22.677 TEST_HEADER include/spdk/nvme_spec.h 00:26:22.677 TEST_HEADER include/spdk/nvme_zns.h 00:26:22.677 TEST_HEADER include/spdk/nvmf_cmd.h 00:26:22.677 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:26:22.677 CC test/app/bdev_svc/bdev_svc.o 00:26:22.677 TEST_HEADER include/spdk/nvmf.h 00:26:22.677 TEST_HEADER include/spdk/nvmf_spec.h 00:26:22.677 TEST_HEADER include/spdk/nvmf_transport.h 00:26:22.677 TEST_HEADER include/spdk/opal.h 00:26:22.677 TEST_HEADER include/spdk/opal_spec.h 00:26:22.677 TEST_HEADER include/spdk/pci_ids.h 00:26:22.677 TEST_HEADER include/spdk/pipe.h 00:26:22.677 TEST_HEADER include/spdk/queue.h 00:26:22.677 TEST_HEADER include/spdk/reduce.h 00:26:22.677 TEST_HEADER include/spdk/rpc.h 00:26:22.677 TEST_HEADER include/spdk/scheduler.h 00:26:22.677 TEST_HEADER include/spdk/scsi.h 00:26:22.677 TEST_HEADER include/spdk/scsi_spec.h 00:26:22.677 TEST_HEADER include/spdk/sock.h 
00:26:22.677 TEST_HEADER include/spdk/stdinc.h 00:26:22.677 TEST_HEADER include/spdk/string.h 00:26:22.677 TEST_HEADER include/spdk/thread.h 00:26:22.677 TEST_HEADER include/spdk/trace.h 00:26:22.677 TEST_HEADER include/spdk/trace_parser.h 00:26:22.677 TEST_HEADER include/spdk/tree.h 00:26:22.677 TEST_HEADER include/spdk/ublk.h 00:26:22.677 TEST_HEADER include/spdk/util.h 00:26:22.677 TEST_HEADER include/spdk/uuid.h 00:26:22.677 TEST_HEADER include/spdk/version.h 00:26:22.677 TEST_HEADER include/spdk/vfio_user_pci.h 00:26:22.677 LINK rpc_client_test 00:26:22.677 TEST_HEADER include/spdk/vfio_user_spec.h 00:26:22.677 TEST_HEADER include/spdk/vhost.h 00:26:22.677 TEST_HEADER include/spdk/vmd.h 00:26:22.677 LINK interrupt_tgt 00:26:22.677 TEST_HEADER include/spdk/xor.h 00:26:22.677 TEST_HEADER include/spdk/zipf.h 00:26:22.677 CXX test/cpp_headers/accel.o 00:26:22.677 LINK zipf 00:26:22.677 LINK poller_perf 00:26:22.677 LINK ioat_perf 00:26:22.677 LINK bdev_svc 00:26:22.938 LINK spdk_trace 00:26:22.938 CXX test/cpp_headers/accel_module.o 00:26:22.938 CC app/trace_record/trace_record.o 00:26:22.938 CC app/nvmf_tgt/nvmf_main.o 00:26:22.938 CC app/iscsi_tgt/iscsi_tgt.o 00:26:22.938 CC examples/ioat/verify/verify.o 00:26:22.938 CXX test/cpp_headers/assert.o 00:26:22.938 CC app/spdk_tgt/spdk_tgt.o 00:26:22.938 LINK test_dma 00:26:22.938 CC test/env/vtophys/vtophys.o 00:26:23.200 LINK mem_callbacks 00:26:23.200 LINK nvmf_tgt 00:26:23.200 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:23.200 CXX test/cpp_headers/barrier.o 00:26:23.200 LINK spdk_trace_record 00:26:23.200 LINK iscsi_tgt 00:26:23.200 LINK verify 00:26:23.200 LINK vtophys 00:26:23.200 CXX test/cpp_headers/base64.o 00:26:23.200 LINK spdk_tgt 00:26:23.200 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:23.200 CXX test/cpp_headers/bdev.o 00:26:23.200 CXX test/cpp_headers/bdev_module.o 00:26:23.460 CC test/app/histogram_perf/histogram_perf.o 00:26:23.460 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:26:23.460 CC test/env/memory/memory_ut.o 00:26:23.460 CC test/event/event_perf/event_perf.o 00:26:23.460 CC app/spdk_lspci/spdk_lspci.o 00:26:23.460 CC examples/thread/thread/thread_ex.o 00:26:23.460 CXX test/cpp_headers/bdev_zone.o 00:26:23.460 LINK env_dpdk_post_init 00:26:23.460 LINK histogram_perf 00:26:23.460 CC app/spdk_nvme_perf/perf.o 00:26:23.460 LINK nvme_fuzz 00:26:23.460 LINK spdk_lspci 00:26:23.728 LINK event_perf 00:26:23.728 CXX test/cpp_headers/bit_array.o 00:26:23.728 CC app/spdk_nvme_identify/identify.o 00:26:23.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:23.728 LINK thread 00:26:23.728 CC test/app/jsoncat/jsoncat.o 00:26:23.728 CC app/spdk_nvme_discover/discovery_aer.o 00:26:23.728 CXX test/cpp_headers/bit_pool.o 00:26:23.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:23.728 CC test/event/reactor/reactor.o 00:26:24.003 LINK jsoncat 00:26:24.003 CXX test/cpp_headers/blob_bdev.o 00:26:24.003 LINK reactor 00:26:24.003 LINK spdk_nvme_discover 00:26:24.003 CXX test/cpp_headers/blobfs_bdev.o 00:26:24.003 CC examples/sock/hello_world/hello_sock.o 00:26:24.264 CC test/event/reactor_perf/reactor_perf.o 00:26:24.264 CXX test/cpp_headers/blobfs.o 00:26:24.265 CC test/app/stub/stub.o 00:26:24.265 CC app/spdk_top/spdk_top.o 00:26:24.265 LINK vhost_fuzz 00:26:24.265 LINK hello_sock 00:26:24.265 LINK reactor_perf 00:26:24.265 CXX test/cpp_headers/blob.o 00:26:24.265 LINK stub 00:26:24.265 LINK spdk_nvme_perf 00:26:24.526 CXX test/cpp_headers/conf.o 00:26:24.526 CC app/vhost/vhost.o 00:26:24.526 CC 
test/event/app_repeat/app_repeat.o 00:26:24.526 LINK spdk_nvme_identify 00:26:24.526 CXX test/cpp_headers/config.o 00:26:24.526 LINK memory_ut 00:26:24.526 CC examples/vmd/lsvmd/lsvmd.o 00:26:24.526 CC examples/vmd/led/led.o 00:26:24.526 CXX test/cpp_headers/cpuset.o 00:26:24.526 LINK app_repeat 00:26:24.526 CC app/spdk_dd/spdk_dd.o 00:26:24.526 CXX test/cpp_headers/crc16.o 00:26:24.526 LINK lsvmd 00:26:24.526 LINK vhost 00:26:24.785 LINK led 00:26:24.785 CC test/env/pci/pci_ut.o 00:26:24.785 CXX test/cpp_headers/crc32.o 00:26:24.785 CXX test/cpp_headers/crc64.o 00:26:24.785 CC app/fio/nvme/fio_plugin.o 00:26:24.785 CC test/event/scheduler/scheduler.o 00:26:24.785 CXX test/cpp_headers/dif.o 00:26:25.043 CC examples/idxd/perf/perf.o 00:26:25.043 LINK spdk_dd 00:26:25.043 CC examples/fsdev/hello_world/hello_fsdev.o 00:26:25.043 LINK iscsi_fuzz 00:26:25.043 CC test/nvme/aer/aer.o 00:26:25.043 CXX test/cpp_headers/dma.o 00:26:25.043 LINK spdk_top 00:26:25.043 LINK scheduler 00:26:25.043 CXX test/cpp_headers/endian.o 00:26:25.043 LINK pci_ut 00:26:25.303 CXX test/cpp_headers/env_dpdk.o 00:26:25.303 CXX test/cpp_headers/env.o 00:26:25.303 LINK hello_fsdev 00:26:25.303 CXX test/cpp_headers/event.o 00:26:25.303 LINK idxd_perf 00:26:25.303 CXX test/cpp_headers/fd_group.o 00:26:25.303 CXX test/cpp_headers/fd.o 00:26:25.303 LINK aer 00:26:25.303 CC app/fio/bdev/fio_plugin.o 00:26:25.303 LINK spdk_nvme 00:26:25.303 CXX test/cpp_headers/file.o 00:26:25.303 CXX test/cpp_headers/fsdev.o 00:26:25.564 CC examples/accel/perf/accel_perf.o 00:26:25.564 CXX test/cpp_headers/fsdev_module.o 00:26:25.564 CXX test/cpp_headers/ftl.o 00:26:25.564 CXX test/cpp_headers/gpt_spec.o 00:26:25.564 CC test/nvme/reset/reset.o 00:26:25.564 CXX test/cpp_headers/hexlify.o 00:26:25.564 CC test/nvme/sgl/sgl.o 00:26:25.564 CC test/accel/dif/dif.o 00:26:25.564 CC test/nvme/e2edp/nvme_dp.o 00:26:25.564 CC test/nvme/overhead/overhead.o 00:26:25.824 CXX test/cpp_headers/histogram_data.o 00:26:25.824 CC test/nvme/err_injection/err_injection.o 00:26:25.824 CC test/nvme/startup/startup.o 00:26:25.824 LINK reset 00:26:25.824 CXX test/cpp_headers/idxd.o 00:26:25.824 LINK sgl 00:26:25.824 LINK spdk_bdev 00:26:25.824 LINK startup 00:26:25.824 LINK err_injection 00:26:25.824 CXX test/cpp_headers/idxd_spec.o 00:26:25.824 LINK nvme_dp 00:26:25.824 LINK accel_perf 00:26:26.085 CXX test/cpp_headers/init.o 00:26:26.085 LINK overhead 00:26:26.085 CXX test/cpp_headers/ioat.o 00:26:26.085 CXX test/cpp_headers/ioat_spec.o 00:26:26.085 CC test/nvme/reserve/reserve.o 00:26:26.085 CC test/nvme/simple_copy/simple_copy.o 00:26:26.085 CXX test/cpp_headers/iscsi_spec.o 00:26:26.085 CC test/nvme/connect_stress/connect_stress.o 00:26:26.085 CC test/nvme/boot_partition/boot_partition.o 00:26:26.085 CC examples/blob/hello_world/hello_blob.o 00:26:26.085 CC examples/blob/cli/blobcli.o 00:26:26.085 CXX test/cpp_headers/json.o 00:26:26.346 LINK reserve 00:26:26.346 LINK connect_stress 00:26:26.346 LINK simple_copy 00:26:26.346 CC examples/nvme/hello_world/hello_world.o 00:26:26.346 LINK boot_partition 00:26:26.346 LINK dif 00:26:26.346 CXX test/cpp_headers/jsonrpc.o 00:26:26.346 CC test/blobfs/mkfs/mkfs.o 00:26:26.346 CXX test/cpp_headers/keyring.o 00:26:26.346 CXX test/cpp_headers/keyring_module.o 00:26:26.346 LINK hello_blob 00:26:26.346 CXX test/cpp_headers/likely.o 00:26:26.607 CXX test/cpp_headers/log.o 00:26:26.607 LINK hello_world 00:26:26.607 CC test/nvme/compliance/nvme_compliance.o 00:26:26.607 CXX test/cpp_headers/lvol.o 00:26:26.607 LINK mkfs 
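The CXX test/cpp_headers/*.o steps threaded through this stretch of the build compile each public SPDK header as its own C++ translation unit, so a header that forgets one of its own includes or breaks under a C++ compiler fails here instead of in a consumer's tree. A minimal sketch of the idea, with the scratch file name, include path, and compiler flags as illustrative assumptions rather than the exact autotest invocation:

    # hypothetical spot-check mirroring one CXX test/cpp_headers/<name>.o step
    hdr=spdk/accel.h
    echo "#include \"$hdr\"" > /tmp/hdr_check.cpp
    g++ -std=c++11 -I include -c /tmp/hdr_check.cpp -o /tmp/hdr_check.o \
      && echo "$hdr is self-contained"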
00:26:26.607 CXX test/cpp_headers/md5.o 00:26:26.607 CXX test/cpp_headers/memory.o 00:26:26.607 CC test/bdev/bdevio/bdevio.o 00:26:26.607 LINK blobcli 00:26:26.607 CXX test/cpp_headers/mmio.o 00:26:26.607 CC test/lvol/esnap/esnap.o 00:26:26.607 CXX test/cpp_headers/nbd.o 00:26:26.867 CC examples/nvme/reconnect/reconnect.o 00:26:26.867 CC test/nvme/fused_ordering/fused_ordering.o 00:26:26.867 CXX test/cpp_headers/net.o 00:26:26.867 CC examples/bdev/hello_world/hello_bdev.o 00:26:26.867 LINK nvme_compliance 00:26:26.867 CC examples/bdev/bdevperf/bdevperf.o 00:26:26.867 CC test/nvme/doorbell_aers/doorbell_aers.o 00:26:26.867 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:26.867 CXX test/cpp_headers/notify.o 00:26:26.867 LINK fused_ordering 00:26:27.128 LINK bdevio 00:26:27.128 LINK hello_bdev 00:26:27.128 CXX test/cpp_headers/nvme.o 00:26:27.128 LINK doorbell_aers 00:26:27.128 CC test/nvme/fdp/fdp.o 00:26:27.128 LINK reconnect 00:26:27.128 CC test/nvme/cuse/cuse.o 00:26:27.128 CXX test/cpp_headers/nvme_intel.o 00:26:27.388 CC examples/nvme/arbitration/arbitration.o 00:26:27.388 CC examples/nvme/hotplug/hotplug.o 00:26:27.388 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:27.388 CC examples/nvme/abort/abort.o 00:26:27.388 CXX test/cpp_headers/nvme_ocssd.o 00:26:27.388 LINK fdp 00:26:27.388 LINK nvme_manage 00:26:27.388 LINK cmb_copy 00:26:27.388 CXX test/cpp_headers/nvme_ocssd_spec.o 00:26:27.388 LINK hotplug 00:26:27.648 CXX test/cpp_headers/nvme_spec.o 00:26:27.649 CXX test/cpp_headers/nvme_zns.o 00:26:27.649 LINK arbitration 00:26:27.649 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:27.649 CXX test/cpp_headers/nvmf_cmd.o 00:26:27.649 CXX test/cpp_headers/nvmf_fc_spec.o 00:26:27.649 LINK abort 00:26:27.649 CXX test/cpp_headers/nvmf.o 00:26:27.649 CXX test/cpp_headers/nvmf_spec.o 00:26:27.649 LINK bdevperf 00:26:27.649 CXX test/cpp_headers/nvmf_transport.o 00:26:27.649 LINK pmr_persistence 00:26:27.910 CXX test/cpp_headers/opal.o 00:26:27.910 CXX test/cpp_headers/opal_spec.o 00:26:27.910 CXX test/cpp_headers/pci_ids.o 00:26:27.910 CXX test/cpp_headers/pipe.o 00:26:27.910 CXX test/cpp_headers/queue.o 00:26:27.910 CXX test/cpp_headers/reduce.o 00:26:27.910 CXX test/cpp_headers/rpc.o 00:26:27.910 CXX test/cpp_headers/scheduler.o 00:26:27.910 CXX test/cpp_headers/scsi.o 00:26:27.910 CXX test/cpp_headers/scsi_spec.o 00:26:27.910 CXX test/cpp_headers/sock.o 00:26:27.910 CXX test/cpp_headers/stdinc.o 00:26:27.910 CXX test/cpp_headers/string.o 00:26:27.910 CXX test/cpp_headers/thread.o 00:26:28.172 CXX test/cpp_headers/trace.o 00:26:28.172 CXX test/cpp_headers/trace_parser.o 00:26:28.172 CC examples/nvmf/nvmf/nvmf.o 00:26:28.172 CXX test/cpp_headers/tree.o 00:26:28.172 CXX test/cpp_headers/ublk.o 00:26:28.172 CXX test/cpp_headers/util.o 00:26:28.172 CXX test/cpp_headers/uuid.o 00:26:28.172 CXX test/cpp_headers/version.o 00:26:28.172 CXX test/cpp_headers/vfio_user_pci.o 00:26:28.172 CXX test/cpp_headers/vfio_user_spec.o 00:26:28.172 CXX test/cpp_headers/vhost.o 00:26:28.172 CXX test/cpp_headers/vmd.o 00:26:28.172 CXX test/cpp_headers/xor.o 00:26:28.172 CXX test/cpp_headers/zipf.o 00:26:28.434 LINK nvmf 00:26:28.434 LINK cuse 00:26:31.727 LINK esnap 00:26:31.727 00:26:31.727 real 1m10.573s 00:26:31.727 user 6m29.443s 00:26:31.727 sys 1m6.468s 00:26:31.727 23:08:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:26:31.727 ************************************ 00:26:31.727 END TEST make 00:26:31.727 ************************************ 00:26:31.727 23:08:12 make -- 
common/autotest_common.sh@10 -- $ set +x 00:26:31.727 23:08:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:26:31.727 23:08:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:31.727 23:08:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:31.727 23:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:31.727 23:08:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:31.727 23:08:12 -- pm/common@44 -- $ pid=5063 00:26:31.727 23:08:12 -- pm/common@50 -- $ kill -TERM 5063 00:26:31.727 23:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:31.727 23:08:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:31.727 23:08:12 -- pm/common@44 -- $ pid=5064 00:26:31.727 23:08:12 -- pm/common@50 -- $ kill -TERM 5064 00:26:31.727 23:08:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:26:31.727 23:08:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:31.727 23:08:12 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:31.727 23:08:12 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:31.727 23:08:12 -- common/autotest_common.sh@1711 -- # lcov --version 00:26:31.989 23:08:12 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:31.989 23:08:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.989 23:08:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.989 23:08:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.989 23:08:12 -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.989 23:08:12 -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.989 23:08:12 -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.989 23:08:12 -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.989 23:08:12 -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.989 23:08:12 -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.989 23:08:12 -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.989 23:08:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.989 23:08:12 -- scripts/common.sh@344 -- # case "$op" in 00:26:31.989 23:08:12 -- scripts/common.sh@345 -- # : 1 00:26:31.989 23:08:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.989 23:08:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.989 23:08:12 -- scripts/common.sh@365 -- # decimal 1 00:26:31.989 23:08:12 -- scripts/common.sh@353 -- # local d=1 00:26:31.989 23:08:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.989 23:08:12 -- scripts/common.sh@355 -- # echo 1 00:26:31.989 23:08:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.989 23:08:12 -- scripts/common.sh@366 -- # decimal 2 00:26:31.989 23:08:12 -- scripts/common.sh@353 -- # local d=2 00:26:31.989 23:08:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.989 23:08:12 -- scripts/common.sh@355 -- # echo 2 00:26:31.989 23:08:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.989 23:08:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.989 23:08:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.989 23:08:12 -- scripts/common.sh@368 -- # return 0 00:26:31.989 23:08:12 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.989 23:08:12 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.989 --rc genhtml_branch_coverage=1 00:26:31.989 --rc genhtml_function_coverage=1 00:26:31.989 --rc genhtml_legend=1 00:26:31.989 --rc geninfo_all_blocks=1 00:26:31.989 --rc geninfo_unexecuted_blocks=1 00:26:31.989 00:26:31.989 ' 00:26:31.989 23:08:12 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.989 --rc genhtml_branch_coverage=1 00:26:31.989 --rc genhtml_function_coverage=1 00:26:31.989 --rc genhtml_legend=1 00:26:31.989 --rc geninfo_all_blocks=1 00:26:31.989 --rc geninfo_unexecuted_blocks=1 00:26:31.989 00:26:31.989 ' 00:26:31.989 23:08:12 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.989 --rc genhtml_branch_coverage=1 00:26:31.989 --rc genhtml_function_coverage=1 00:26:31.989 --rc genhtml_legend=1 00:26:31.989 --rc geninfo_all_blocks=1 00:26:31.989 --rc geninfo_unexecuted_blocks=1 00:26:31.989 00:26:31.989 ' 00:26:31.989 23:08:12 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.989 --rc genhtml_branch_coverage=1 00:26:31.989 --rc genhtml_function_coverage=1 00:26:31.989 --rc genhtml_legend=1 00:26:31.989 --rc geninfo_all_blocks=1 00:26:31.989 --rc geninfo_unexecuted_blocks=1 00:26:31.989 00:26:31.989 ' 00:26:31.989 23:08:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.989 23:08:12 -- nvmf/common.sh@7 -- # uname -s 00:26:31.989 23:08:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.989 23:08:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.989 23:08:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.989 23:08:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.989 23:08:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.989 23:08:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.989 23:08:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.989 23:08:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.989 23:08:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.989 23:08:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.989 23:08:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee621cbe-db37-404e-aebf-629496038471 00:26:31.989 
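The scripts/common.sh xtrace just above (cmp_versions 1.15 '<' 2) is a field-wise numeric version compare: IFS is set to '.', '-' and ':', read -ra splits each version string into an array, and the fields are compared left to right until one side wins. A condensed sketch of that flow, simplified from the traced function rather than copied from it:

    # returns 0 when $1 sorts strictly before $2, field by field
    version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

This is why the run above lands in the lcov 1.x branch: 1 < 2 on the first field, the compare returns 0, and lcov_rc_opt picks up the branch/function coverage flags.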
23:08:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee621cbe-db37-404e-aebf-629496038471 00:26:31.989 23:08:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.989 23:08:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.989 23:08:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:31.989 23:08:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.989 23:08:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.989 23:08:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.989 23:08:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.989 23:08:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.989 23:08:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.989 23:08:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.989 23:08:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.989 23:08:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.989 23:08:12 -- paths/export.sh@5 -- # export PATH 00:26:31.989 23:08:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.989 23:08:12 -- nvmf/common.sh@51 -- # : 0 00:26:31.989 23:08:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.989 23:08:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.989 23:08:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.989 23:08:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.989 23:08:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.989 23:08:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.989 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.989 23:08:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.989 23:08:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.989 23:08:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.989 23:08:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:26:31.989 23:08:12 -- spdk/autotest.sh@32 -- # uname -s 00:26:31.989 23:08:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:26:31.990 23:08:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:26:31.990 23:08:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:31.990 23:08:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:26:31.990 23:08:12 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:31.990 23:08:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:26:31.990 23:08:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:26:31.990 23:08:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:26:31.990 23:08:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54264 00:26:31.990 23:08:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:26:31.990 23:08:12 -- pm/common@17 -- # local monitor 00:26:31.990 23:08:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:31.990 23:08:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:31.990 23:08:12 -- pm/common@25 -- # sleep 1 00:26:31.990 23:08:12 -- pm/common@21 -- # date +%s 00:26:31.990 23:08:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:26:31.990 23:08:12 -- pm/common@21 -- # date +%s 00:26:31.990 23:08:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785692 00:26:31.990 23:08:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785692 00:26:31.990 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785692_collect-cpu-load.pm.log 00:26:31.990 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785692_collect-vmstat.pm.log 00:26:33.372 23:08:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:26:33.372 23:08:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:26:33.372 23:08:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.372 23:08:13 -- common/autotest_common.sh@10 -- # set +x 00:26:33.372 23:08:13 -- spdk/autotest.sh@59 -- # create_test_list 00:26:33.372 23:08:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:26:33.372 23:08:13 -- common/autotest_common.sh@10 -- # set +x 00:26:33.372 23:08:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:26:33.372 23:08:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:26:33.372 23:08:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:26:33.372 23:08:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:26:33.372 23:08:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:26:33.372 23:08:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:26:33.372 23:08:13 -- common/autotest_common.sh@1457 -- # uname 00:26:33.372 23:08:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:26:33.372 23:08:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:26:33.372 23:08:13 -- common/autotest_common.sh@1477 -- # uname 00:26:33.372 23:08:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:26:33.372 23:08:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:26:33.372 23:08:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:26:33.372 lcov: LCOV version 1.15 00:26:33.372 23:08:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:26:48.282 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:26:48.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:27:03.201 23:08:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:27:03.201 23:08:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:03.201 23:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:03.201 23:08:42 -- spdk/autotest.sh@78 -- # rm -f 00:27:03.201 23:08:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:03.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.201 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:03.201 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:03.201 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:03.201 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:03.201 23:08:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:27:03.201 23:08:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:27:03.201 23:08:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:27:03.201 23:08:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:27:03.201 23:08:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:27:03.201 23:08:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:27:03.201 23:08:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:27:03.201 23:08:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:27:03.201 23:08:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:27:03.201 23:08:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:27:03.201 23:08:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112452 s, 93.2 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377569 s, 278 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450619 s, 233 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515854 s, 203 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00355096 s, 295 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:03.201 23:08:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:03.201 23:08:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:27:03.201 23:08:43 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:27:03.201 23:08:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:27:03.201 No valid GPT data, bailing 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:27:03.201 23:08:43 -- scripts/common.sh@394 -- # pt= 00:27:03.201 23:08:43 -- scripts/common.sh@395 -- # return 1 00:27:03.201 23:08:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:27:03.201 1+0 records in 00:27:03.201 1+0 records out 00:27:03.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445427 s, 235 MB/s 00:27:03.201 23:08:43 -- spdk/autotest.sh@105 -- # sync 00:27:03.461 23:08:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:27:03.461 23:08:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:27:03.461 23:08:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:27:04.844 
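Each "No valid GPT data, bailing" above comes from spdk-gpt.py finding no partition table on a namespace; blkid then reports an empty PTTYPE, block_in_use returns 1, and autotest zeroes the first MiB of the device with dd before the tests claim it. The per-device loop, simplified from the trace (the verbatim autotest version does more bookkeeping):

    shopt -s extglob                    # the !(*p*) glob below needs extglob
    for dev in /dev/nvme*n!(*p*); do    # whole namespaces, not partitions
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z "$pt" ]]; then
        # nothing is using the device: scrub the label area, as dd does above
        dd if=/dev/zero of="$dev" bs=1M count=1
      fi
    done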
23:08:45 -- spdk/autotest.sh@111 -- # uname -s 00:27:04.844 23:08:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:27:04.844 23:08:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:27:04.844 23:08:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:05.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:05.678 Hugepages 00:27:05.678 node hugesize free / total 00:27:05.678 node0 1048576kB 0 / 0 00:27:05.678 node0 2048kB 0 / 0 00:27:05.678 00:27:05.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:05.678 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:05.678 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:05.678 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:27:05.939 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:27:05.939 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:27:05.939 23:08:46 -- spdk/autotest.sh@117 -- # uname -s 00:27:05.939 23:08:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:27:05.939 23:08:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:27:05.939 23:08:46 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:06.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.793 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.793 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.793 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.793 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.793 23:08:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:27:08.187 23:08:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:27:08.187 23:08:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:27:08.187 23:08:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:27:08.187 23:08:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:27:08.187 23:08:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:08.187 23:08:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:08.187 23:08:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:08.187 23:08:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:08.187 23:08:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:08.187 23:08:48 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:08.187 23:08:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:08.187 23:08:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:08.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:08.453 Waiting for block devices as requested 00:27:08.453 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:08.453 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:08.453 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:08.453 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.802 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:13.802 23:08:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
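The Hugepages summary a little earlier (node0, 1048576kB and 2048kB, both 0 free / 0 total) is read from per-NUMA-node sysfs counters; reproducing it by hand looks roughly like this, assuming the standard kernel sysfs layout rather than quoting setup.sh itself:

    for dir in /sys/devices/system/node/node0/hugepages/hugepages-*; do
      size=${dir##*hugepages-}          # e.g. 2048kB or 1048576kB
      echo "node0 $size $(cat "$dir/free_hugepages") / $(cat "$dir/nr_hugepages")"
    done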
00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:13.802 23:08:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1543 -- # continue 00:27:13.802 23:08:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:13.802 23:08:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1543 -- # continue 00:27:13.802 23:08:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:13.802 23:08:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1543 -- # continue 00:27:13.802 23:08:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:27:13.802 23:08:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:13.802 23:08:54 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:27:13.802 23:08:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:13.802 23:08:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:13.802 23:08:54 -- common/autotest_common.sh@1543 -- # continue 00:27:13.802 23:08:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:27:13.802 23:08:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.802 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:27:13.802 23:08:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:27:13.802 23:08:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.802 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:27:13.802 23:08:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:14.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:14.684 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:14.685 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:14.685 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:14.685 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:14.685 23:08:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:27:14.685 23:08:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.685 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:27:14.685 23:08:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:27:14.685 23:08:55 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:27:14.685 23:08:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:27:14.685 23:08:55 -- common/autotest_common.sh@1563 -- # bdfs=() 00:27:14.685 23:08:55 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:27:14.685 23:08:55 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:27:14.685 23:08:55 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:27:14.685 23:08:55 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:27:14.685 23:08:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:14.685 23:08:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:14.685 23:08:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:14.685 23:08:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:14.685 23:08:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:14.685 23:08:55 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:14.685 23:08:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:14.685 23:08:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:14.685 23:08:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:14.685 23:08:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:14.685 
23:08:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:14.685 23:08:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:14.685 23:08:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:14.685 23:08:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:27:14.685 23:08:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:14.685 23:08:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:14.685 23:08:55 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:27:14.685 23:08:55 -- common/autotest_common.sh@1572 -- # return 0 00:27:14.685 23:08:55 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:27:14.685 23:08:55 -- common/autotest_common.sh@1580 -- # return 0 00:27:14.685 23:08:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:27:14.685 23:08:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:27:14.685 23:08:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:14.685 23:08:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:14.685 23:08:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:27:14.685 23:08:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.685 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:27:14.685 23:08:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:27:14.685 23:08:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:14.685 23:08:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.685 23:08:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.685 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:27:14.685 ************************************ 00:27:14.685 START TEST env 00:27:14.685 ************************************ 00:27:14.685 23:08:55 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:14.950 * Looking for test storage... 
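The id-ctrl probing traced above gates the pre-test cleanup: OACS comes back as 0x12a, whose bit 3 (0x8) means namespace management is supported, and an unvmcap of 0 means no NVM capacity is left unallocated, so every controller is skipped with continue. The later [[ 0x0010 == 0x0a54 ]] comparisons then filter the controllers down to the one PCI device ID that opal_revert_cleanup cares about; the emulated 0x0010 devices never match, so the revert list stays empty. The gist of the OACS/unvmcap check, with the device node as an illustrative stand-in:

    ctrlr=/dev/nvme1                    # stand-in; the trace derives this per BDF
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then           # bit 3: namespace management supported
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "$ctrlr: fully allocated, nothing to revert"
    fi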
00:27:14.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1711 -- # lcov --version 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:14.950 23:08:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.950 23:08:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.950 23:08:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.950 23:08:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.950 23:08:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.950 23:08:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.950 23:08:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.950 23:08:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.950 23:08:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.950 23:08:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.950 23:08:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.950 23:08:55 env -- scripts/common.sh@344 -- # case "$op" in 00:27:14.950 23:08:55 env -- scripts/common.sh@345 -- # : 1 00:27:14.950 23:08:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.950 23:08:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.950 23:08:55 env -- scripts/common.sh@365 -- # decimal 1 00:27:14.950 23:08:55 env -- scripts/common.sh@353 -- # local d=1 00:27:14.950 23:08:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.950 23:08:55 env -- scripts/common.sh@355 -- # echo 1 00:27:14.950 23:08:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.950 23:08:55 env -- scripts/common.sh@366 -- # decimal 2 00:27:14.950 23:08:55 env -- scripts/common.sh@353 -- # local d=2 00:27:14.950 23:08:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.950 23:08:55 env -- scripts/common.sh@355 -- # echo 2 00:27:14.950 23:08:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.950 23:08:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.950 23:08:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.950 23:08:55 env -- scripts/common.sh@368 -- # return 0 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:14.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.950 --rc genhtml_branch_coverage=1 00:27:14.950 --rc genhtml_function_coverage=1 00:27:14.950 --rc genhtml_legend=1 00:27:14.950 --rc geninfo_all_blocks=1 00:27:14.950 --rc geninfo_unexecuted_blocks=1 00:27:14.950 00:27:14.950 ' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:14.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.950 --rc genhtml_branch_coverage=1 00:27:14.950 --rc genhtml_function_coverage=1 00:27:14.950 --rc genhtml_legend=1 00:27:14.950 --rc geninfo_all_blocks=1 00:27:14.950 --rc geninfo_unexecuted_blocks=1 00:27:14.950 00:27:14.950 ' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:14.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.950 --rc genhtml_branch_coverage=1 00:27:14.950 --rc genhtml_function_coverage=1 00:27:14.950 --rc 
genhtml_legend=1 00:27:14.950 --rc geninfo_all_blocks=1 00:27:14.950 --rc geninfo_unexecuted_blocks=1 00:27:14.950 00:27:14.950 ' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:14.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.950 --rc genhtml_branch_coverage=1 00:27:14.950 --rc genhtml_function_coverage=1 00:27:14.950 --rc genhtml_legend=1 00:27:14.950 --rc geninfo_all_blocks=1 00:27:14.950 --rc geninfo_unexecuted_blocks=1 00:27:14.950 00:27:14.950 ' 00:27:14.950 23:08:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.950 23:08:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.950 23:08:55 env -- common/autotest_common.sh@10 -- # set +x 00:27:14.950 ************************************ 00:27:14.950 START TEST env_memory 00:27:14.950 ************************************ 00:27:14.950 23:08:55 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:14.950 00:27:14.950 00:27:14.950 CUnit - A unit testing framework for C - Version 2.1-3 00:27:14.950 http://cunit.sourceforge.net/ 00:27:14.950 00:27:14.950 00:27:14.950 Suite: memory 00:27:14.950 Test: alloc and free memory map ...[2024-12-09 23:08:55.508531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:27:14.950 passed 00:27:14.950 Test: mem map translation ...[2024-12-09 23:08:55.549905] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:27:14.950 [2024-12-09 23:08:55.550071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:27:14.950 [2024-12-09 23:08:55.550196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:27:14.950 [2024-12-09 23:08:55.550261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:27:15.210 passed 00:27:15.210 Test: mem map registration ...[2024-12-09 23:08:55.618760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:27:15.210 [2024-12-09 23:08:55.618893] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:27:15.210 passed 00:27:15.210 Test: mem map adjacent registrations ...passed 00:27:15.210 00:27:15.210 Run Summary: Type Total Ran Passed Failed Inactive 00:27:15.210 suites 1 1 n/a 0 0 00:27:15.210 tests 4 4 4 0 0 00:27:15.210 asserts 152 152 152 0 n/a 00:27:15.210 00:27:15.210 Elapsed time = 0.236 seconds 00:27:15.210 00:27:15.210 real 0m0.266s 00:27:15.210 user 0m0.242s 00:27:15.210 sys 0m0.017s 00:27:15.210 ************************************ 00:27:15.210 END TEST env_memory 00:27:15.210 ************************************ 00:27:15.210 23:08:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.210 23:08:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:27:15.210 23:08:55 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:15.210 23:08:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:15.210 23:08:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.210 23:08:55 env -- common/autotest_common.sh@10 -- # set +x 00:27:15.210 ************************************ 00:27:15.210 START TEST env_vtophys 00:27:15.210 ************************************ 00:27:15.210 23:08:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:15.210 EAL: lib.eal log level changed from notice to debug 00:27:15.210 EAL: Detected lcore 0 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 1 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 2 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 3 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 4 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 5 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 6 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 7 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 8 as core 0 on socket 0 00:27:15.210 EAL: Detected lcore 9 as core 0 on socket 0 00:27:15.210 EAL: Maximum logical cores by configuration: 128 00:27:15.210 EAL: Detected CPU lcores: 10 00:27:15.210 EAL: Detected NUMA nodes: 1 00:27:15.210 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:27:15.210 EAL: Detected shared linkage of DPDK 00:27:15.210 EAL: No shared files mode enabled, IPC will be disabled 00:27:15.210 EAL: Selected IOVA mode 'PA' 00:27:15.210 EAL: Probing VFIO support... 00:27:15.210 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:15.210 EAL: VFIO modules not loaded, skipping VFIO support... 00:27:15.210 EAL: Ask a virtual area of 0x2e000 bytes 00:27:15.210 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:27:15.210 EAL: Setting up physically contiguous memory... 
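The EAL probe above explains the IOVA mode for this run: neither the vfio nor the vfio_pci kernel module is present, so VFIO support is skipped and EAL falls back to IOVA mode 'PA' with uio-style device access. A minimal, hedged sketch of how a host could be prepared so the VFIO path is exercised instead (assumes SPDK's bundled scripts/setup.sh and an IOMMU enabled in the kernel, e.g. intel_iommu=on; without one, EAL will keep selecting 'PA' as seen here):

  # Load VFIO and rebind the NVMe controllers through SPDK's setup script.
  sudo modprobe vfio-pci
  sudo DRIVER_OVERRIDE=vfio-pci ./scripts/setup.sh
  ./scripts/setup.sh status   # confirm the devices are now bound to vfio-pci

The memseg reservations that follow are unaffected by this choice; only the translation backing (physical addresses vs IOMMU virtual addresses) changes.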
00:27:15.210 EAL: Setting maximum number of open files to 524288 00:27:15.210 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:27:15.210 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:27:15.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:15.210 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:27:15.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:15.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:15.210 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:27:15.210 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:27:15.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:15.210 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:27:15.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:15.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:15.210 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:27:15.210 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:27:15.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:15.210 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:27:15.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:15.211 EAL: Ask a virtual area of 0x400000000 bytes 00:27:15.211 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:27:15.211 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:27:15.211 EAL: Ask a virtual area of 0x61000 bytes 00:27:15.211 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:27:15.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:15.211 EAL: Ask a virtual area of 0x400000000 bytes 00:27:15.211 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:27:15.211 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:27:15.211 EAL: Hugepages will be freed exactly as allocated. 00:27:15.211 EAL: No shared files mode enabled, IPC is disabled 00:27:15.211 EAL: No shared files mode enabled, IPC is disabled 00:27:15.471 EAL: TSC frequency is ~2600000 KHz 00:27:15.471 EAL: Main lcore 0 is ready (tid=7fae7eec9a40;cpuset=[0]) 00:27:15.471 EAL: Trying to obtain current memory policy. 00:27:15.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.471 EAL: Restoring previous memory policy: 0 00:27:15.471 EAL: request: mp_malloc_sync 00:27:15.471 EAL: No shared files mode enabled, IPC is disabled 00:27:15.471 EAL: Heap on socket 0 was expanded by 2MB 00:27:15.471 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:15.471 EAL: No PCI address specified using 'addr=' in: bus=pci 00:27:15.471 EAL: Mem event callback 'spdk:(nil)' registered 00:27:15.471 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:27:15.471 00:27:15.471 00:27:15.471 CUnit - A unit testing framework for C - Version 2.1-3 00:27:15.471 http://cunit.sourceforge.net/ 00:27:15.471 00:27:15.471 00:27:15.471 Suite: components_suite 00:27:15.730 Test: vtophys_malloc_test ...passed 00:27:15.730 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
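A quick sanity check on the reservations above: each memseg list is created with n_segs:8192 pages of hugepage_sz:2097152 bytes (2 MiB), so a single list needs 8192 x 2 MiB = 16 GiB of virtual address space, which is exactly the 0x400000000-byte areas EAL asks for; the four lists together cover 64 GiB. The same arithmetic in shell:

  printf '0x%x\n' $(( 8192 * 2097152 ))     # 0x400000000, 16 GiB per memseg list
  echo $(( 4 * 8192 * 2097152 / 1024**3 ))  # 64 GiB addressable across 4 lists

The malloc ramp that begins here continues below, stepping from 4 MB up to 1026 MB.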
00:27:15.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.730 EAL: Restoring previous memory policy: 4 00:27:15.730 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.730 EAL: request: mp_malloc_sync 00:27:15.730 EAL: No shared files mode enabled, IPC is disabled 00:27:15.730 EAL: Heap on socket 0 was expanded by 4MB 00:27:15.730 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.730 EAL: request: mp_malloc_sync 00:27:15.730 EAL: No shared files mode enabled, IPC is disabled 00:27:15.730 EAL: Heap on socket 0 was shrunk by 4MB 00:27:15.730 EAL: Trying to obtain current memory policy. 00:27:15.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.730 EAL: Restoring previous memory policy: 4 00:27:15.730 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.730 EAL: request: mp_malloc_sync 00:27:15.730 EAL: No shared files mode enabled, IPC is disabled 00:27:15.730 EAL: Heap on socket 0 was expanded by 6MB 00:27:15.730 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.730 EAL: request: mp_malloc_sync 00:27:15.730 EAL: No shared files mode enabled, IPC is disabled 00:27:15.730 EAL: Heap on socket 0 was shrunk by 6MB 00:27:15.730 EAL: Trying to obtain current memory policy. 00:27:15.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.730 EAL: Restoring previous memory policy: 4 00:27:15.731 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.731 EAL: request: mp_malloc_sync 00:27:15.731 EAL: No shared files mode enabled, IPC is disabled 00:27:15.731 EAL: Heap on socket 0 was expanded by 10MB 00:27:15.731 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.731 EAL: request: mp_malloc_sync 00:27:15.731 EAL: No shared files mode enabled, IPC is disabled 00:27:15.731 EAL: Heap on socket 0 was shrunk by 10MB 00:27:15.731 EAL: Trying to obtain current memory policy. 00:27:15.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.731 EAL: Restoring previous memory policy: 4 00:27:15.731 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.731 EAL: request: mp_malloc_sync 00:27:15.731 EAL: No shared files mode enabled, IPC is disabled 00:27:15.731 EAL: Heap on socket 0 was expanded by 18MB 00:27:15.731 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.731 EAL: request: mp_malloc_sync 00:27:15.731 EAL: No shared files mode enabled, IPC is disabled 00:27:15.731 EAL: Heap on socket 0 was shrunk by 18MB 00:27:15.731 EAL: Trying to obtain current memory policy. 00:27:15.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.731 EAL: Restoring previous memory policy: 4 00:27:15.731 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.731 EAL: request: mp_malloc_sync 00:27:15.731 EAL: No shared files mode enabled, IPC is disabled 00:27:15.731 EAL: Heap on socket 0 was expanded by 34MB 00:27:15.993 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.993 EAL: request: mp_malloc_sync 00:27:15.993 EAL: No shared files mode enabled, IPC is disabled 00:27:15.993 EAL: Heap on socket 0 was shrunk by 34MB 00:27:15.993 EAL: Trying to obtain current memory policy. 
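The repeating expand/shrink pairs above and below are the heart of vtophys_spdk_malloc_test: each allocation that outgrows the DPDK heap triggers the registered 'spdk:(nil)' mem event callback plus an mp_malloc_sync request, and the matching free shrinks the heap by the same amount. When auditing a captured run it can be worth confirming that every expansion has a matching shrink; a small sketch (the log file name is illustrative):

  # Tally heap expansions against shrinks from a saved vtophys log.
  grep -Eo 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' vtophys.log \
    | sort | uniq -c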
00:27:15.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.993 EAL: Restoring previous memory policy: 4 00:27:15.993 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.993 EAL: request: mp_malloc_sync 00:27:15.993 EAL: No shared files mode enabled, IPC is disabled 00:27:15.993 EAL: Heap on socket 0 was expanded by 66MB 00:27:15.993 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.993 EAL: request: mp_malloc_sync 00:27:15.993 EAL: No shared files mode enabled, IPC is disabled 00:27:15.993 EAL: Heap on socket 0 was shrunk by 66MB 00:27:16.253 EAL: Trying to obtain current memory policy. 00:27:16.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:16.253 EAL: Restoring previous memory policy: 4 00:27:16.253 EAL: Calling mem event callback 'spdk:(nil)' 00:27:16.253 EAL: request: mp_malloc_sync 00:27:16.253 EAL: No shared files mode enabled, IPC is disabled 00:27:16.253 EAL: Heap on socket 0 was expanded by 130MB 00:27:16.253 EAL: Calling mem event callback 'spdk:(nil)' 00:27:16.253 EAL: request: mp_malloc_sync 00:27:16.253 EAL: No shared files mode enabled, IPC is disabled 00:27:16.253 EAL: Heap on socket 0 was shrunk by 130MB 00:27:16.511 EAL: Trying to obtain current memory policy. 00:27:16.511 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:16.511 EAL: Restoring previous memory policy: 4 00:27:16.511 EAL: Calling mem event callback 'spdk:(nil)' 00:27:16.511 EAL: request: mp_malloc_sync 00:27:16.511 EAL: No shared files mode enabled, IPC is disabled 00:27:16.511 EAL: Heap on socket 0 was expanded by 258MB 00:27:16.772 EAL: Calling mem event callback 'spdk:(nil)' 00:27:16.772 EAL: request: mp_malloc_sync 00:27:16.772 EAL: No shared files mode enabled, IPC is disabled 00:27:16.772 EAL: Heap on socket 0 was shrunk by 258MB 00:27:17.046 EAL: Trying to obtain current memory policy. 00:27:17.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:17.046 EAL: Restoring previous memory policy: 4 00:27:17.046 EAL: Calling mem event callback 'spdk:(nil)' 00:27:17.046 EAL: request: mp_malloc_sync 00:27:17.046 EAL: No shared files mode enabled, IPC is disabled 00:27:17.046 EAL: Heap on socket 0 was expanded by 514MB 00:27:17.668 EAL: Calling mem event callback 'spdk:(nil)' 00:27:17.668 EAL: request: mp_malloc_sync 00:27:17.668 EAL: No shared files mode enabled, IPC is disabled 00:27:17.668 EAL: Heap on socket 0 was shrunk by 514MB 00:27:18.239 EAL: Trying to obtain current memory policy. 
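The ramp's final round below expands the heap by 1026 MB, so the host needs at least that much hugepage-backed memory on top of what is already in use; with 2 MiB pages that is 513 pages. A hedged way to provision headroom before running the test (the page count is illustrative):

  # Reserve 1024 x 2 MiB hugepages (2 GiB) and verify the kernel accepted it.
  echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
  grep Huge /proc/meminfo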
00:27:18.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:18.500 EAL: Restoring previous memory policy: 4 00:27:18.500 EAL: Calling mem event callback 'spdk:(nil)' 00:27:18.500 EAL: request: mp_malloc_sync 00:27:18.500 EAL: No shared files mode enabled, IPC is disabled 00:27:18.500 EAL: Heap on socket 0 was expanded by 1026MB 00:27:19.442 EAL: Calling mem event callback 'spdk:(nil)' 00:27:19.442 EAL: request: mp_malloc_sync 00:27:19.442 EAL: No shared files mode enabled, IPC is disabled 00:27:19.442 EAL: Heap on socket 0 was shrunk by 1026MB 00:27:20.823 passed 00:27:20.823 00:27:20.823 Run Summary: Type Total Ran Passed Failed Inactive 00:27:20.823 suites 1 1 n/a 0 0 00:27:20.823 tests 2 2 2 0 0 00:27:20.823 asserts 5845 5845 5845 0 n/a 00:27:20.823 00:27:20.823 Elapsed time = 5.106 seconds 00:27:20.823 EAL: Calling mem event callback 'spdk:(nil)' 00:27:20.823 EAL: request: mp_malloc_sync 00:27:20.823 EAL: No shared files mode enabled, IPC is disabled 00:27:20.823 EAL: Heap on socket 0 was shrunk by 2MB 00:27:20.823 EAL: No shared files mode enabled, IPC is disabled 00:27:20.823 EAL: No shared files mode enabled, IPC is disabled 00:27:20.823 EAL: No shared files mode enabled, IPC is disabled 00:27:20.823 00:27:20.823 real 0m5.371s 00:27:20.823 user 0m4.570s 00:27:20.823 sys 0m0.650s 00:27:20.823 23:09:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.823 23:09:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:27:20.823 ************************************ 00:27:20.823 END TEST env_vtophys 00:27:20.823 ************************************ 00:27:20.823 23:09:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:20.823 23:09:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:20.823 23:09:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.823 23:09:01 env -- common/autotest_common.sh@10 -- # set +x 00:27:20.823 ************************************ 00:27:20.823 START TEST env_pci 00:27:20.823 ************************************ 00:27:20.823 23:09:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:20.823 00:27:20.823 00:27:20.823 CUnit - A unit testing framework for C - Version 2.1-3 00:27:20.823 http://cunit.sourceforge.net/ 00:27:20.823 00:27:20.823 00:27:20.823 Suite: pci 00:27:20.823 Test: pci_hook ...[2024-12-09 23:09:01.196637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57011 has claimed it 00:27:20.823 passed 00:27:20.824 00:27:20.824 Run Summary: Type Total Ran Passed Failed Inactive 00:27:20.824 suites 1 1 n/a 0 0 00:27:20.824 tests 1 1 1 0 0 00:27:20.824 asserts 25 25 25 0 n/a 00:27:20.824 00:27:20.824 Elapsed time = 0.007 seconds 00:27:20.824 EAL: Cannot find device (10000:00:01.0) 00:27:20.824 EAL: Failed to attach device on primary process 00:27:20.824 00:27:20.824 real 0m0.063s 00:27:20.824 user 0m0.031s 00:27:20.824 sys 0m0.031s 00:27:20.824 23:09:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.824 ************************************ 00:27:20.824 END TEST env_pci 00:27:20.824 ************************************ 00:27:20.824 23:09:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:27:20.824 23:09:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:27:20.824 23:09:01 env -- env/env.sh@15 -- # uname 00:27:20.824 23:09:01 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:27:20.824 23:09:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:27:20.824 23:09:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:20.824 23:09:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:20.824 23:09:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.824 23:09:01 env -- common/autotest_common.sh@10 -- # set +x 00:27:20.824 ************************************ 00:27:20.824 START TEST env_dpdk_post_init 00:27:20.824 ************************************ 00:27:20.824 23:09:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:20.824 EAL: Detected CPU lcores: 10 00:27:20.824 EAL: Detected NUMA nodes: 1 00:27:20.824 EAL: Detected shared linkage of DPDK 00:27:20.824 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:20.824 EAL: Selected IOVA mode 'PA' 00:27:20.824 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:21.084 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:27:21.084 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:27:21.084 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:27:21.084 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:27:21.084 Starting DPDK initialization... 00:27:21.084 Starting SPDK post initialization... 00:27:21.084 SPDK NVMe probe 00:27:21.084 Attaching to 0000:00:10.0 00:27:21.084 Attaching to 0000:00:11.0 00:27:21.084 Attaching to 0000:00:12.0 00:27:21.084 Attaching to 0000:00:13.0 00:27:21.084 Attached to 0000:00:10.0 00:27:21.084 Attached to 0000:00:11.0 00:27:21.084 Attached to 0000:00:13.0 00:27:21.084 Attached to 0000:00:12.0 00:27:21.084 Cleaning up... 
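All four controllers probed above report vendor:device 1b36:0010, the QEMU-emulated NVMe controller, which also explains the socket -1 in every probe line: the VM exposes no NUMA locality for them. A quick host-side cross-check of what the spdk_nvme driver attached to:

  # List the emulated QEMU NVMe functions visible to this VM.
  lspci -nn -d 1b36:0010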
00:27:21.084 00:27:21.084 real 0m0.249s 00:27:21.084 user 0m0.085s 00:27:21.084 sys 0m0.064s 00:27:21.084 ************************************ 00:27:21.084 END TEST env_dpdk_post_init 00:27:21.084 ************************************ 00:27:21.084 23:09:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.084 23:09:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:27:21.084 23:09:01 env -- env/env.sh@26 -- # uname 00:27:21.084 23:09:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:27:21.084 23:09:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:21.084 23:09:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:21.084 23:09:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.084 23:09:01 env -- common/autotest_common.sh@10 -- # set +x 00:27:21.084 ************************************ 00:27:21.084 START TEST env_mem_callbacks 00:27:21.084 ************************************ 00:27:21.084 23:09:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:21.084 EAL: Detected CPU lcores: 10 00:27:21.084 EAL: Detected NUMA nodes: 1 00:27:21.084 EAL: Detected shared linkage of DPDK 00:27:21.084 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:21.084 EAL: Selected IOVA mode 'PA' 00:27:21.343 00:27:21.343 00:27:21.343 CUnit - A unit testing framework for C - Version 2.1-3 00:27:21.343 http://cunit.sourceforge.net/ 00:27:21.343 00:27:21.343 00:27:21.343 Suite: memory 00:27:21.343 Test: test ... 00:27:21.343 register 0x200000200000 2097152 00:27:21.343 malloc 3145728 00:27:21.343 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:21.343 register 0x200000400000 4194304 00:27:21.343 buf 0x2000004fffc0 len 3145728 PASSED 00:27:21.343 malloc 64 00:27:21.343 buf 0x2000004ffec0 len 64 PASSED 00:27:21.343 malloc 4194304 00:27:21.343 register 0x200000800000 6291456 00:27:21.343 buf 0x2000009fffc0 len 4194304 PASSED 00:27:21.343 free 0x2000004fffc0 3145728 00:27:21.343 free 0x2000004ffec0 64 00:27:21.343 unregister 0x200000400000 4194304 PASSED 00:27:21.343 free 0x2000009fffc0 4194304 00:27:21.343 unregister 0x200000800000 6291456 PASSED 00:27:21.343 malloc 8388608 00:27:21.343 register 0x200000400000 10485760 00:27:21.343 buf 0x2000005fffc0 len 8388608 PASSED 00:27:21.343 free 0x2000005fffc0 8388608 00:27:21.343 unregister 0x200000400000 10485760 PASSED 00:27:21.343 passed 00:27:21.343 00:27:21.343 Run Summary: Type Total Ran Passed Failed Inactive 00:27:21.343 suites 1 1 n/a 0 0 00:27:21.343 tests 1 1 1 0 0 00:27:21.343 asserts 15 15 15 0 n/a 00:27:21.343 00:27:21.343 Elapsed time = 0.046 seconds 00:27:21.343 00:27:21.343 real 0m0.216s 00:27:21.343 user 0m0.066s 00:27:21.343 sys 0m0.049s 00:27:21.343 23:09:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.343 23:09:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:27:21.343 ************************************ 00:27:21.343 END TEST env_mem_callbacks 00:27:21.343 ************************************ 00:27:21.343 ************************************ 00:27:21.343 END TEST env 00:27:21.343 ************************************ 00:27:21.343 00:27:21.343 real 0m6.530s 00:27:21.343 user 0m5.160s 00:27:21.343 sys 0m0.988s 00:27:21.343 23:09:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.343 23:09:01 env -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.343 23:09:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:21.343 23:09:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:21.344 23:09:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.344 23:09:01 -- common/autotest_common.sh@10 -- # set +x 00:27:21.344 ************************************ 00:27:21.344 START TEST rpc 00:27:21.344 ************************************ 00:27:21.344 23:09:01 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:21.344 * Looking for test storage... 00:27:21.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:21.344 23:09:01 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:21.344 23:09:01 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:21.344 23:09:01 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:21.603 23:09:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:21.603 23:09:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.603 23:09:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.603 23:09:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.603 23:09:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.603 23:09:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.603 23:09:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.604 23:09:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.604 23:09:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.604 23:09:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:21.604 23:09:02 rpc -- scripts/common.sh@345 -- # : 1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.604 23:09:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.604 23:09:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@353 -- # local d=1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.604 23:09:02 rpc -- scripts/common.sh@355 -- # echo 1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.604 23:09:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@353 -- # local d=2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.604 23:09:02 rpc -- scripts/common.sh@355 -- # echo 2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.604 23:09:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.604 23:09:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.604 23:09:02 rpc -- scripts/common.sh@368 -- # return 0 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:21.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.604 --rc genhtml_branch_coverage=1 00:27:21.604 --rc genhtml_function_coverage=1 00:27:21.604 --rc genhtml_legend=1 00:27:21.604 --rc geninfo_all_blocks=1 00:27:21.604 --rc geninfo_unexecuted_blocks=1 00:27:21.604 00:27:21.604 ' 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:21.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.604 --rc genhtml_branch_coverage=1 00:27:21.604 --rc genhtml_function_coverage=1 00:27:21.604 --rc genhtml_legend=1 00:27:21.604 --rc geninfo_all_blocks=1 00:27:21.604 --rc geninfo_unexecuted_blocks=1 00:27:21.604 00:27:21.604 ' 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:21.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.604 --rc genhtml_branch_coverage=1 00:27:21.604 --rc genhtml_function_coverage=1 00:27:21.604 --rc genhtml_legend=1 00:27:21.604 --rc geninfo_all_blocks=1 00:27:21.604 --rc geninfo_unexecuted_blocks=1 00:27:21.604 00:27:21.604 ' 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:21.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.604 --rc genhtml_branch_coverage=1 00:27:21.604 --rc genhtml_function_coverage=1 00:27:21.604 --rc genhtml_legend=1 00:27:21.604 --rc geninfo_all_blocks=1 00:27:21.604 --rc geninfo_unexecuted_blocks=1 00:27:21.604 00:27:21.604 ' 00:27:21.604 23:09:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57138 00:27:21.604 23:09:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:21.604 23:09:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57138 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 57138 ']' 00:27:21.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.604 23:09:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
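The harness above launches spdk_tgt with '-e bdev' (a tracepoint group mask, echoed later by trace_get_info) and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock is serving. A stand-alone sketch of the same pattern, assuming a built SPDK tree with the paths used in this log; the socket existence test is cruder than waitforlisten, which also verifies an RPC round trip:

  # Start the target with bdev tracepoints, wait for its RPC socket, probe it.
  ./build/bin/spdk_tgt -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  ./scripts/rpc.py spdk_get_version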
00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.604 23:09:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:21.604 [2024-12-09 23:09:02.109266] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:27:21.604 [2024-12-09 23:09:02.109591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57138 ] 00:27:21.863 [2024-12-09 23:09:02.273104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.863 [2024-12-09 23:09:02.377197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:27:21.863 [2024-12-09 23:09:02.377400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57138' to capture a snapshot of events at runtime. 00:27:21.863 [2024-12-09 23:09:02.377587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.863 [2024-12-09 23:09:02.377600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.863 [2024-12-09 23:09:02.377608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57138 for offline analysis/debug. 00:27:21.863 [2024-12-09 23:09:02.378471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.434 23:09:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.434 23:09:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:27:22.434 23:09:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:22.434 23:09:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:22.434 23:09:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:27:22.434 23:09:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:27:22.434 23:09:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.434 23:09:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.434 23:09:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 ************************************ 00:27:22.434 START TEST rpc_integrity 00:27:22.434 ************************************ 00:27:22.434 23:09:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:27:22.434 23:09:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:22.434 23:09:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.434 23:09:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 23:09:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.434 23:09:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:22.434 23:09:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:22.434 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:22.434 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:22.434 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.434 23:09:03 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.434 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:27:22.434 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:22.434 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.434 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.434 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:22.434 { 00:27:22.434 "name": "Malloc0", 00:27:22.434 "aliases": [ 00:27:22.434 "fed905d7-ecd8-4eb6-9f24-510daa823901" 00:27:22.434 ], 00:27:22.434 "product_name": "Malloc disk", 00:27:22.434 "block_size": 512, 00:27:22.434 "num_blocks": 16384, 00:27:22.434 "uuid": "fed905d7-ecd8-4eb6-9f24-510daa823901", 00:27:22.434 "assigned_rate_limits": { 00:27:22.434 "rw_ios_per_sec": 0, 00:27:22.434 "rw_mbytes_per_sec": 0, 00:27:22.434 "r_mbytes_per_sec": 0, 00:27:22.434 "w_mbytes_per_sec": 0 00:27:22.434 }, 00:27:22.434 "claimed": false, 00:27:22.434 "zoned": false, 00:27:22.434 "supported_io_types": { 00:27:22.434 "read": true, 00:27:22.434 "write": true, 00:27:22.434 "unmap": true, 00:27:22.434 "flush": true, 00:27:22.434 "reset": true, 00:27:22.434 "nvme_admin": false, 00:27:22.435 "nvme_io": false, 00:27:22.435 "nvme_io_md": false, 00:27:22.435 "write_zeroes": true, 00:27:22.435 "zcopy": true, 00:27:22.435 "get_zone_info": false, 00:27:22.435 "zone_management": false, 00:27:22.435 "zone_append": false, 00:27:22.435 "compare": false, 00:27:22.435 "compare_and_write": false, 00:27:22.435 "abort": true, 00:27:22.435 "seek_hole": false, 00:27:22.435 "seek_data": false, 00:27:22.435 "copy": true, 00:27:22.435 "nvme_iov_md": false 00:27:22.435 }, 00:27:22.435 "memory_domains": [ 00:27:22.435 { 00:27:22.435 "dma_device_id": "system", 00:27:22.435 "dma_device_type": 1 00:27:22.435 }, 00:27:22.435 { 00:27:22.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.435 "dma_device_type": 2 00:27:22.435 } 00:27:22.435 ], 00:27:22.435 "driver_specific": {} 00:27:22.435 } 00:27:22.435 ]' 00:27:22.435 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 [2024-12-09 23:09:03.102683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:27:22.696 [2024-12-09 23:09:03.102749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.696 [2024-12-09 23:09:03.102776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:22.696 [2024-12-09 23:09:03.102788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.696 [2024-12-09 23:09:03.105030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.696 [2024-12-09 23:09:03.105179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:22.696 Passthru0 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.696 
23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:22.696 { 00:27:22.696 "name": "Malloc0", 00:27:22.696 "aliases": [ 00:27:22.696 "fed905d7-ecd8-4eb6-9f24-510daa823901" 00:27:22.696 ], 00:27:22.696 "product_name": "Malloc disk", 00:27:22.696 "block_size": 512, 00:27:22.696 "num_blocks": 16384, 00:27:22.696 "uuid": "fed905d7-ecd8-4eb6-9f24-510daa823901", 00:27:22.696 "assigned_rate_limits": { 00:27:22.696 "rw_ios_per_sec": 0, 00:27:22.696 "rw_mbytes_per_sec": 0, 00:27:22.696 "r_mbytes_per_sec": 0, 00:27:22.696 "w_mbytes_per_sec": 0 00:27:22.696 }, 00:27:22.696 "claimed": true, 00:27:22.696 "claim_type": "exclusive_write", 00:27:22.696 "zoned": false, 00:27:22.696 "supported_io_types": { 00:27:22.696 "read": true, 00:27:22.696 "write": true, 00:27:22.696 "unmap": true, 00:27:22.696 "flush": true, 00:27:22.696 "reset": true, 00:27:22.696 "nvme_admin": false, 00:27:22.696 "nvme_io": false, 00:27:22.696 "nvme_io_md": false, 00:27:22.696 "write_zeroes": true, 00:27:22.696 "zcopy": true, 00:27:22.696 "get_zone_info": false, 00:27:22.696 "zone_management": false, 00:27:22.696 "zone_append": false, 00:27:22.696 "compare": false, 00:27:22.696 "compare_and_write": false, 00:27:22.696 "abort": true, 00:27:22.696 "seek_hole": false, 00:27:22.696 "seek_data": false, 00:27:22.696 "copy": true, 00:27:22.696 "nvme_iov_md": false 00:27:22.696 }, 00:27:22.696 "memory_domains": [ 00:27:22.696 { 00:27:22.696 "dma_device_id": "system", 00:27:22.696 "dma_device_type": 1 00:27:22.696 }, 00:27:22.696 { 00:27:22.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.696 "dma_device_type": 2 00:27:22.696 } 00:27:22.696 ], 00:27:22.696 "driver_specific": {} 00:27:22.696 }, 00:27:22.696 { 00:27:22.696 "name": "Passthru0", 00:27:22.696 "aliases": [ 00:27:22.696 "510bcbbd-f580-5c29-a21e-ffb7dd8678f3" 00:27:22.696 ], 00:27:22.696 "product_name": "passthru", 00:27:22.696 "block_size": 512, 00:27:22.696 "num_blocks": 16384, 00:27:22.696 "uuid": "510bcbbd-f580-5c29-a21e-ffb7dd8678f3", 00:27:22.696 "assigned_rate_limits": { 00:27:22.696 "rw_ios_per_sec": 0, 00:27:22.696 "rw_mbytes_per_sec": 0, 00:27:22.696 "r_mbytes_per_sec": 0, 00:27:22.696 "w_mbytes_per_sec": 0 00:27:22.696 }, 00:27:22.696 "claimed": false, 00:27:22.696 "zoned": false, 00:27:22.696 "supported_io_types": { 00:27:22.696 "read": true, 00:27:22.696 "write": true, 00:27:22.696 "unmap": true, 00:27:22.696 "flush": true, 00:27:22.696 "reset": true, 00:27:22.696 "nvme_admin": false, 00:27:22.696 "nvme_io": false, 00:27:22.696 "nvme_io_md": false, 00:27:22.696 "write_zeroes": true, 00:27:22.696 "zcopy": true, 00:27:22.696 "get_zone_info": false, 00:27:22.696 "zone_management": false, 00:27:22.696 "zone_append": false, 00:27:22.696 "compare": false, 00:27:22.696 "compare_and_write": false, 00:27:22.696 "abort": true, 00:27:22.696 "seek_hole": false, 00:27:22.696 "seek_data": false, 00:27:22.696 "copy": true, 00:27:22.696 "nvme_iov_md": false 00:27:22.696 }, 00:27:22.696 "memory_domains": [ 00:27:22.696 { 00:27:22.696 "dma_device_id": "system", 00:27:22.696 "dma_device_type": 1 00:27:22.696 }, 00:27:22.696 { 00:27:22.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.696 "dma_device_type": 2 
00:27:22.696 } 00:27:22.696 ], 00:27:22.696 "driver_specific": { 00:27:22.696 "passthru": { 00:27:22.696 "name": "Passthru0", 00:27:22.696 "base_bdev_name": "Malloc0" 00:27:22.696 } 00:27:22.696 } 00:27:22.696 } 00:27:22.696 ]' 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:22.696 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:22.697 ************************************ 00:27:22.697 END TEST rpc_integrity 00:27:22.697 ************************************ 00:27:22.697 23:09:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:22.697 00:27:22.697 real 0m0.245s 00:27:22.697 user 0m0.122s 00:27:22.697 sys 0m0.039s 00:27:22.697 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.697 23:09:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:22.697 23:09:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:27:22.697 23:09:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.697 23:09:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.697 23:09:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.697 ************************************ 00:27:22.697 START TEST rpc_plugins 00:27:22.697 ************************************ 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:27:22.697 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.697 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:27:22.697 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:22.697 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.697 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:27:22.697 { 00:27:22.697 "name": "Malloc1", 00:27:22.697 "aliases": 
[ 00:27:22.697 "2db9d6c6-502c-4da3-824f-c43562887ffc" 00:27:22.697 ], 00:27:22.697 "product_name": "Malloc disk", 00:27:22.697 "block_size": 4096, 00:27:22.697 "num_blocks": 256, 00:27:22.697 "uuid": "2db9d6c6-502c-4da3-824f-c43562887ffc", 00:27:22.697 "assigned_rate_limits": { 00:27:22.697 "rw_ios_per_sec": 0, 00:27:22.697 "rw_mbytes_per_sec": 0, 00:27:22.697 "r_mbytes_per_sec": 0, 00:27:22.697 "w_mbytes_per_sec": 0 00:27:22.697 }, 00:27:22.697 "claimed": false, 00:27:22.697 "zoned": false, 00:27:22.697 "supported_io_types": { 00:27:22.697 "read": true, 00:27:22.697 "write": true, 00:27:22.697 "unmap": true, 00:27:22.697 "flush": true, 00:27:22.697 "reset": true, 00:27:22.697 "nvme_admin": false, 00:27:22.697 "nvme_io": false, 00:27:22.697 "nvme_io_md": false, 00:27:22.697 "write_zeroes": true, 00:27:22.697 "zcopy": true, 00:27:22.697 "get_zone_info": false, 00:27:22.697 "zone_management": false, 00:27:22.697 "zone_append": false, 00:27:22.697 "compare": false, 00:27:22.697 "compare_and_write": false, 00:27:22.697 "abort": true, 00:27:22.697 "seek_hole": false, 00:27:22.697 "seek_data": false, 00:27:22.697 "copy": true, 00:27:22.697 "nvme_iov_md": false 00:27:22.697 }, 00:27:22.697 "memory_domains": [ 00:27:22.697 { 00:27:22.697 "dma_device_id": "system", 00:27:22.697 "dma_device_type": 1 00:27:22.697 }, 00:27:22.697 { 00:27:22.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.697 "dma_device_type": 2 00:27:22.697 } 00:27:22.697 ], 00:27:22.697 "driver_specific": {} 00:27:22.697 } 00:27:22.697 ]' 00:27:22.697 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:27:22.959 23:09:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:27:22.959 00:27:22.959 real 0m0.125s 00:27:22.959 user 0m0.064s 00:27:22.959 sys 0m0.019s 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.959 23:09:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:22.959 ************************************ 00:27:22.959 END TEST rpc_plugins 00:27:22.959 ************************************ 00:27:22.959 23:09:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:27:22.959 23:09:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.959 23:09:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.959 23:09:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.959 ************************************ 00:27:22.959 START TEST rpc_trace_cmd_test 00:27:22.959 ************************************ 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:27:22.959 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57138", 00:27:22.959 "tpoint_group_mask": "0x8", 00:27:22.959 "iscsi_conn": { 00:27:22.959 "mask": "0x2", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "scsi": { 00:27:22.959 "mask": "0x4", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "bdev": { 00:27:22.959 "mask": "0x8", 00:27:22.959 "tpoint_mask": "0xffffffffffffffff" 00:27:22.959 }, 00:27:22.959 "nvmf_rdma": { 00:27:22.959 "mask": "0x10", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "nvmf_tcp": { 00:27:22.959 "mask": "0x20", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "ftl": { 00:27:22.959 "mask": "0x40", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "blobfs": { 00:27:22.959 "mask": "0x80", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "dsa": { 00:27:22.959 "mask": "0x200", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "thread": { 00:27:22.959 "mask": "0x400", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "nvme_pcie": { 00:27:22.959 "mask": "0x800", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "iaa": { 00:27:22.959 "mask": "0x1000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "nvme_tcp": { 00:27:22.959 "mask": "0x2000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "bdev_nvme": { 00:27:22.959 "mask": "0x4000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "sock": { 00:27:22.959 "mask": "0x8000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "blob": { 00:27:22.959 "mask": "0x10000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "bdev_raid": { 00:27:22.959 "mask": "0x20000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 }, 00:27:22.959 "scheduler": { 00:27:22.959 "mask": "0x40000", 00:27:22.959 "tpoint_mask": "0x0" 00:27:22.959 } 00:27:22.959 }' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:27:22.959 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:27:23.221 ************************************ 00:27:23.221 END TEST rpc_trace_cmd_test 00:27:23.221 ************************************ 00:27:23.221 23:09:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:27:23.221 00:27:23.221 real 0m0.170s 
00:27:23.221 user 0m0.143s 00:27:23.221 sys 0m0.019s 00:27:23.221 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.221 23:09:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.221 23:09:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:27:23.221 23:09:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:27:23.221 23:09:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:27:23.221 23:09:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.221 23:09:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.221 23:09:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:23.221 ************************************ 00:27:23.221 START TEST rpc_daemon_integrity 00:27:23.221 ************************************ 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:23.221 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:23.222 { 00:27:23.222 "name": "Malloc2", 00:27:23.222 "aliases": [ 00:27:23.222 "657263d6-71eb-4588-b01e-c7539bd68062" 00:27:23.222 ], 00:27:23.222 "product_name": "Malloc disk", 00:27:23.222 "block_size": 512, 00:27:23.222 "num_blocks": 16384, 00:27:23.222 "uuid": "657263d6-71eb-4588-b01e-c7539bd68062", 00:27:23.222 "assigned_rate_limits": { 00:27:23.222 "rw_ios_per_sec": 0, 00:27:23.222 "rw_mbytes_per_sec": 0, 00:27:23.222 "r_mbytes_per_sec": 0, 00:27:23.222 "w_mbytes_per_sec": 0 00:27:23.222 }, 00:27:23.222 "claimed": false, 00:27:23.222 "zoned": false, 00:27:23.222 "supported_io_types": { 00:27:23.222 "read": true, 00:27:23.222 "write": true, 00:27:23.222 "unmap": true, 00:27:23.222 "flush": true, 00:27:23.222 "reset": true, 00:27:23.222 "nvme_admin": false, 00:27:23.222 "nvme_io": false, 00:27:23.222 "nvme_io_md": false, 00:27:23.222 "write_zeroes": true, 00:27:23.222 "zcopy": true, 00:27:23.222 "get_zone_info": false, 00:27:23.222 "zone_management": false, 00:27:23.222 "zone_append": false, 00:27:23.222 "compare": false, 00:27:23.222 
"compare_and_write": false, 00:27:23.222 "abort": true, 00:27:23.222 "seek_hole": false, 00:27:23.222 "seek_data": false, 00:27:23.222 "copy": true, 00:27:23.222 "nvme_iov_md": false 00:27:23.222 }, 00:27:23.222 "memory_domains": [ 00:27:23.222 { 00:27:23.222 "dma_device_id": "system", 00:27:23.222 "dma_device_type": 1 00:27:23.222 }, 00:27:23.222 { 00:27:23.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.222 "dma_device_type": 2 00:27:23.222 } 00:27:23.222 ], 00:27:23.222 "driver_specific": {} 00:27:23.222 } 00:27:23.222 ]' 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 [2024-12-09 23:09:03.758301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:27:23.222 [2024-12-09 23:09:03.758358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.222 [2024-12-09 23:09:03.758378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:23.222 [2024-12-09 23:09:03.758390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.222 [2024-12-09 23:09:03.760544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.222 [2024-12-09 23:09:03.760678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:23.222 Passthru0 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:23.222 { 00:27:23.222 "name": "Malloc2", 00:27:23.222 "aliases": [ 00:27:23.222 "657263d6-71eb-4588-b01e-c7539bd68062" 00:27:23.222 ], 00:27:23.222 "product_name": "Malloc disk", 00:27:23.222 "block_size": 512, 00:27:23.222 "num_blocks": 16384, 00:27:23.222 "uuid": "657263d6-71eb-4588-b01e-c7539bd68062", 00:27:23.222 "assigned_rate_limits": { 00:27:23.222 "rw_ios_per_sec": 0, 00:27:23.222 "rw_mbytes_per_sec": 0, 00:27:23.222 "r_mbytes_per_sec": 0, 00:27:23.222 "w_mbytes_per_sec": 0 00:27:23.222 }, 00:27:23.222 "claimed": true, 00:27:23.222 "claim_type": "exclusive_write", 00:27:23.222 "zoned": false, 00:27:23.222 "supported_io_types": { 00:27:23.222 "read": true, 00:27:23.222 "write": true, 00:27:23.222 "unmap": true, 00:27:23.222 "flush": true, 00:27:23.222 "reset": true, 00:27:23.222 "nvme_admin": false, 00:27:23.222 "nvme_io": false, 00:27:23.222 "nvme_io_md": false, 00:27:23.222 "write_zeroes": true, 00:27:23.222 "zcopy": true, 00:27:23.222 "get_zone_info": false, 00:27:23.222 "zone_management": false, 00:27:23.222 "zone_append": false, 00:27:23.222 "compare": false, 00:27:23.222 "compare_and_write": false, 00:27:23.222 "abort": true, 00:27:23.222 "seek_hole": false, 00:27:23.222 "seek_data": false, 
00:27:23.222 "copy": true, 00:27:23.222 "nvme_iov_md": false 00:27:23.222 }, 00:27:23.222 "memory_domains": [ 00:27:23.222 { 00:27:23.222 "dma_device_id": "system", 00:27:23.222 "dma_device_type": 1 00:27:23.222 }, 00:27:23.222 { 00:27:23.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.222 "dma_device_type": 2 00:27:23.222 } 00:27:23.222 ], 00:27:23.222 "driver_specific": {} 00:27:23.222 }, 00:27:23.222 { 00:27:23.222 "name": "Passthru0", 00:27:23.222 "aliases": [ 00:27:23.222 "f0097b89-96f7-5a1e-b6f4-64a9d143f23a" 00:27:23.222 ], 00:27:23.222 "product_name": "passthru", 00:27:23.222 "block_size": 512, 00:27:23.222 "num_blocks": 16384, 00:27:23.222 "uuid": "f0097b89-96f7-5a1e-b6f4-64a9d143f23a", 00:27:23.222 "assigned_rate_limits": { 00:27:23.222 "rw_ios_per_sec": 0, 00:27:23.222 "rw_mbytes_per_sec": 0, 00:27:23.222 "r_mbytes_per_sec": 0, 00:27:23.222 "w_mbytes_per_sec": 0 00:27:23.222 }, 00:27:23.222 "claimed": false, 00:27:23.222 "zoned": false, 00:27:23.222 "supported_io_types": { 00:27:23.222 "read": true, 00:27:23.222 "write": true, 00:27:23.222 "unmap": true, 00:27:23.222 "flush": true, 00:27:23.222 "reset": true, 00:27:23.222 "nvme_admin": false, 00:27:23.222 "nvme_io": false, 00:27:23.222 "nvme_io_md": false, 00:27:23.222 "write_zeroes": true, 00:27:23.222 "zcopy": true, 00:27:23.222 "get_zone_info": false, 00:27:23.222 "zone_management": false, 00:27:23.222 "zone_append": false, 00:27:23.222 "compare": false, 00:27:23.222 "compare_and_write": false, 00:27:23.222 "abort": true, 00:27:23.222 "seek_hole": false, 00:27:23.222 "seek_data": false, 00:27:23.222 "copy": true, 00:27:23.222 "nvme_iov_md": false 00:27:23.222 }, 00:27:23.222 "memory_domains": [ 00:27:23.222 { 00:27:23.222 "dma_device_id": "system", 00:27:23.222 "dma_device_type": 1 00:27:23.222 }, 00:27:23.222 { 00:27:23.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.222 "dma_device_type": 2 00:27:23.222 } 00:27:23.222 ], 00:27:23.222 "driver_specific": { 00:27:23.222 "passthru": { 00:27:23.222 "name": "Passthru0", 00:27:23.222 "base_bdev_name": "Malloc2" 00:27:23.222 } 00:27:23.222 } 00:27:23.222 } 00:27:23.222 ]' 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.222 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:23.483 ************************************ 00:27:23.483 END TEST rpc_daemon_integrity 00:27:23.483 ************************************ 00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:23.483 00:27:23.483 real 0m0.262s 00:27:23.483 user 0m0.129s 00:27:23.483 sys 0m0.033s 00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.483 23:09:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:23.483 23:09:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:23.483 23:09:03 rpc -- rpc/rpc.sh@84 -- # killprocess 57138 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 57138 ']' 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@958 -- # kill -0 57138 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@959 -- # uname 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57138 00:27:23.483 killing process with pid 57138 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57138' 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@973 -- # kill 57138 00:27:23.483 23:09:03 rpc -- common/autotest_common.sh@978 -- # wait 57138 00:27:24.867 00:27:24.867 real 0m3.602s 00:27:24.867 user 0m4.009s 00:27:24.867 sys 0m0.640s 00:27:24.867 23:09:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.867 ************************************ 00:27:24.867 23:09:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.867 END TEST rpc 00:27:24.867 ************************************ 00:27:25.127 23:09:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:25.127 23:09:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:25.127 23:09:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.127 23:09:05 -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 ************************************ 00:27:25.127 START TEST skip_rpc 00:27:25.127 ************************************ 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:25.127 * Looking for test storage... 
00:27:25.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.127 23:09:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.127 --rc genhtml_branch_coverage=1 00:27:25.127 --rc genhtml_function_coverage=1 00:27:25.127 --rc genhtml_legend=1 00:27:25.127 --rc geninfo_all_blocks=1 00:27:25.127 --rc geninfo_unexecuted_blocks=1 00:27:25.127 00:27:25.127 ' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.127 --rc genhtml_branch_coverage=1 00:27:25.127 --rc genhtml_function_coverage=1 00:27:25.127 --rc genhtml_legend=1 00:27:25.127 --rc geninfo_all_blocks=1 00:27:25.127 --rc geninfo_unexecuted_blocks=1 00:27:25.127 00:27:25.127 ' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.127 --rc genhtml_branch_coverage=1 00:27:25.127 --rc genhtml_function_coverage=1 00:27:25.127 --rc genhtml_legend=1 00:27:25.127 --rc geninfo_all_blocks=1 00:27:25.127 --rc geninfo_unexecuted_blocks=1 00:27:25.127 00:27:25.127 ' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.127 --rc genhtml_branch_coverage=1 00:27:25.127 --rc genhtml_function_coverage=1 00:27:25.127 --rc genhtml_legend=1 00:27:25.127 --rc geninfo_all_blocks=1 00:27:25.127 --rc geninfo_unexecuted_blocks=1 00:27:25.127 00:27:25.127 ' 00:27:25.127 23:09:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:25.127 23:09:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:25.127 23:09:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.127 23:09:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 ************************************ 00:27:25.127 START TEST skip_rpc 00:27:25.127 ************************************ 00:27:25.127 23:09:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:27:25.127 23:09:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57351 00:27:25.127 23:09:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:25.127 23:09:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:27:25.127 23:09:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:27:25.127 [2024-12-09 23:09:05.743455] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
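The target just launched carries --no-rpc-server, so the probe that follows is a negative check: after the five-second settle, rpc_cmd spdk_get_version must fail because nothing is listening on the socket. A sketch of that expectation outside the harness (paths as in the trace; the plain if-inversion stands in for the harness's NOT wrapper):

    # Negative check: with --no-rpc-server, any RPC call has to fail.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                        # give the reactor time to start, as above

    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
        kill "$tgt_pid"; exit 1
    fi
    kill "$tgt_pid"; wait "$tgt_pid" || true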
00:27:25.127 [2024-12-09 23:09:05.743719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57351 ] 00:27:25.388 [2024-12-09 23:09:05.900138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.388 [2024-12-09 23:09:06.003879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57351 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57351 ']' 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57351 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57351 00:27:30.689 killing process with pid 57351 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57351' 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57351 00:27:30.689 23:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57351 00:27:31.635 ************************************ 00:27:31.635 END TEST skip_rpc 00:27:31.635 ************************************ 00:27:31.635 00:27:31.635 real 0m6.276s 00:27:31.635 user 0m5.905s 00:27:31.635 sys 0m0.265s 00:27:31.635 23:09:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.635 23:09:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:27:31.635 23:09:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:27:31.635 23:09:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.635 23:09:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.635 23:09:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:31.635 ************************************ 00:27:31.635 START TEST skip_rpc_with_json 00:27:31.635 ************************************ 00:27:31.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57444 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57444 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57444 ']' 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:31.635 23:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:31.635 [2024-12-09 23:09:12.051709] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
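What follows is skip_rpc_with_json's gen_json_config phase: bring up a target, create a TCP transport over RPC, snapshot the whole runtime state with save_config, then relaunch a second target from that file with --json and grep its log for the transport-init notice. Compressed into a sketch (file paths taken from the trace; flags are the standard spdk_tgt/rpc.py ones):

    # Round-trip: live config -> JSON file -> fresh target replays it.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json

    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt  # config was really applied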
00:27:31.635 [2024-12-09 23:09:12.051849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57444 ] 00:27:31.635 [2024-12-09 23:09:12.207242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.896 [2024-12-09 23:09:12.291009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:32.488 [2024-12-09 23:09:12.844743] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:27:32.488 request: 00:27:32.488 { 00:27:32.488 "trtype": "tcp", 00:27:32.488 "method": "nvmf_get_transports", 00:27:32.488 "req_id": 1 00:27:32.488 } 00:27:32.488 Got JSON-RPC error response 00:27:32.488 response: 00:27:32.488 { 00:27:32.488 "code": -19, 00:27:32.488 "message": "No such device" 00:27:32.488 } 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:32.488 [2024-12-09 23:09:12.856836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.488 23:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:32.488 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.488 23:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:32.488 { 00:27:32.488 "subsystems": [ 00:27:32.488 { 00:27:32.488 "subsystem": "fsdev", 00:27:32.488 "config": [ 00:27:32.488 { 00:27:32.488 "method": "fsdev_set_opts", 00:27:32.488 "params": { 00:27:32.488 "fsdev_io_pool_size": 65535, 00:27:32.488 "fsdev_io_cache_size": 256 00:27:32.488 } 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "keyring", 00:27:32.488 "config": [] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "iobuf", 00:27:32.488 "config": [ 00:27:32.488 { 00:27:32.488 "method": "iobuf_set_options", 00:27:32.488 "params": { 00:27:32.488 "small_pool_count": 8192, 00:27:32.488 "large_pool_count": 1024, 00:27:32.488 "small_bufsize": 8192, 00:27:32.488 "large_bufsize": 135168, 00:27:32.488 "enable_numa": false 00:27:32.488 } 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "sock", 00:27:32.488 "config": [ 00:27:32.488 { 
00:27:32.488 "method": "sock_set_default_impl", 00:27:32.488 "params": { 00:27:32.488 "impl_name": "posix" 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "sock_impl_set_options", 00:27:32.488 "params": { 00:27:32.488 "impl_name": "ssl", 00:27:32.488 "recv_buf_size": 4096, 00:27:32.488 "send_buf_size": 4096, 00:27:32.488 "enable_recv_pipe": true, 00:27:32.488 "enable_quickack": false, 00:27:32.488 "enable_placement_id": 0, 00:27:32.488 "enable_zerocopy_send_server": true, 00:27:32.488 "enable_zerocopy_send_client": false, 00:27:32.488 "zerocopy_threshold": 0, 00:27:32.488 "tls_version": 0, 00:27:32.488 "enable_ktls": false 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "sock_impl_set_options", 00:27:32.488 "params": { 00:27:32.488 "impl_name": "posix", 00:27:32.488 "recv_buf_size": 2097152, 00:27:32.488 "send_buf_size": 2097152, 00:27:32.488 "enable_recv_pipe": true, 00:27:32.488 "enable_quickack": false, 00:27:32.488 "enable_placement_id": 0, 00:27:32.488 "enable_zerocopy_send_server": true, 00:27:32.488 "enable_zerocopy_send_client": false, 00:27:32.488 "zerocopy_threshold": 0, 00:27:32.488 "tls_version": 0, 00:27:32.488 "enable_ktls": false 00:27:32.488 } 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "vmd", 00:27:32.488 "config": [] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "accel", 00:27:32.488 "config": [ 00:27:32.488 { 00:27:32.488 "method": "accel_set_options", 00:27:32.488 "params": { 00:27:32.488 "small_cache_size": 128, 00:27:32.488 "large_cache_size": 16, 00:27:32.488 "task_count": 2048, 00:27:32.488 "sequence_count": 2048, 00:27:32.488 "buf_count": 2048 00:27:32.488 } 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "bdev", 00:27:32.488 "config": [ 00:27:32.488 { 00:27:32.488 "method": "bdev_set_options", 00:27:32.488 "params": { 00:27:32.488 "bdev_io_pool_size": 65535, 00:27:32.488 "bdev_io_cache_size": 256, 00:27:32.488 "bdev_auto_examine": true, 00:27:32.488 "iobuf_small_cache_size": 128, 00:27:32.488 "iobuf_large_cache_size": 16 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "bdev_raid_set_options", 00:27:32.488 "params": { 00:27:32.488 "process_window_size_kb": 1024, 00:27:32.488 "process_max_bandwidth_mb_sec": 0 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "bdev_iscsi_set_options", 00:27:32.488 "params": { 00:27:32.488 "timeout_sec": 30 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "bdev_nvme_set_options", 00:27:32.488 "params": { 00:27:32.488 "action_on_timeout": "none", 00:27:32.488 "timeout_us": 0, 00:27:32.488 "timeout_admin_us": 0, 00:27:32.488 "keep_alive_timeout_ms": 10000, 00:27:32.488 "arbitration_burst": 0, 00:27:32.488 "low_priority_weight": 0, 00:27:32.488 "medium_priority_weight": 0, 00:27:32.488 "high_priority_weight": 0, 00:27:32.488 "nvme_adminq_poll_period_us": 10000, 00:27:32.488 "nvme_ioq_poll_period_us": 0, 00:27:32.488 "io_queue_requests": 0, 00:27:32.488 "delay_cmd_submit": true, 00:27:32.488 "transport_retry_count": 4, 00:27:32.488 "bdev_retry_count": 3, 00:27:32.488 "transport_ack_timeout": 0, 00:27:32.488 "ctrlr_loss_timeout_sec": 0, 00:27:32.488 "reconnect_delay_sec": 0, 00:27:32.488 "fast_io_fail_timeout_sec": 0, 00:27:32.488 "disable_auto_failback": false, 00:27:32.488 "generate_uuids": false, 00:27:32.488 "transport_tos": 0, 00:27:32.488 "nvme_error_stat": false, 00:27:32.488 "rdma_srq_size": 0, 00:27:32.488 "io_path_stat": false, 
00:27:32.488 "allow_accel_sequence": false, 00:27:32.488 "rdma_max_cq_size": 0, 00:27:32.488 "rdma_cm_event_timeout_ms": 0, 00:27:32.488 "dhchap_digests": [ 00:27:32.488 "sha256", 00:27:32.488 "sha384", 00:27:32.488 "sha512" 00:27:32.488 ], 00:27:32.488 "dhchap_dhgroups": [ 00:27:32.488 "null", 00:27:32.488 "ffdhe2048", 00:27:32.488 "ffdhe3072", 00:27:32.488 "ffdhe4096", 00:27:32.488 "ffdhe6144", 00:27:32.488 "ffdhe8192" 00:27:32.488 ] 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "bdev_nvme_set_hotplug", 00:27:32.488 "params": { 00:27:32.488 "period_us": 100000, 00:27:32.488 "enable": false 00:27:32.488 } 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "method": "bdev_wait_for_examine" 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "scsi", 00:27:32.488 "config": null 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "scheduler", 00:27:32.488 "config": [ 00:27:32.488 { 00:27:32.488 "method": "framework_set_scheduler", 00:27:32.488 "params": { 00:27:32.488 "name": "static" 00:27:32.488 } 00:27:32.488 } 00:27:32.488 ] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "vhost_scsi", 00:27:32.488 "config": [] 00:27:32.488 }, 00:27:32.488 { 00:27:32.488 "subsystem": "vhost_blk", 00:27:32.488 "config": [] 00:27:32.488 }, 00:27:32.488 { 00:27:32.489 "subsystem": "ublk", 00:27:32.489 "config": [] 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "subsystem": "nbd", 00:27:32.489 "config": [] 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "subsystem": "nvmf", 00:27:32.489 "config": [ 00:27:32.489 { 00:27:32.489 "method": "nvmf_set_config", 00:27:32.489 "params": { 00:27:32.489 "discovery_filter": "match_any", 00:27:32.489 "admin_cmd_passthru": { 00:27:32.489 "identify_ctrlr": false 00:27:32.489 }, 00:27:32.489 "dhchap_digests": [ 00:27:32.489 "sha256", 00:27:32.489 "sha384", 00:27:32.489 "sha512" 00:27:32.489 ], 00:27:32.489 "dhchap_dhgroups": [ 00:27:32.489 "null", 00:27:32.489 "ffdhe2048", 00:27:32.489 "ffdhe3072", 00:27:32.489 "ffdhe4096", 00:27:32.489 "ffdhe6144", 00:27:32.489 "ffdhe8192" 00:27:32.489 ] 00:27:32.489 } 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "method": "nvmf_set_max_subsystems", 00:27:32.489 "params": { 00:27:32.489 "max_subsystems": 1024 00:27:32.489 } 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "method": "nvmf_set_crdt", 00:27:32.489 "params": { 00:27:32.489 "crdt1": 0, 00:27:32.489 "crdt2": 0, 00:27:32.489 "crdt3": 0 00:27:32.489 } 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "method": "nvmf_create_transport", 00:27:32.489 "params": { 00:27:32.489 "trtype": "TCP", 00:27:32.489 "max_queue_depth": 128, 00:27:32.489 "max_io_qpairs_per_ctrlr": 127, 00:27:32.489 "in_capsule_data_size": 4096, 00:27:32.489 "max_io_size": 131072, 00:27:32.489 "io_unit_size": 131072, 00:27:32.489 "max_aq_depth": 128, 00:27:32.489 "num_shared_buffers": 511, 00:27:32.489 "buf_cache_size": 4294967295, 00:27:32.489 "dif_insert_or_strip": false, 00:27:32.489 "zcopy": false, 00:27:32.489 "c2h_success": true, 00:27:32.489 "sock_priority": 0, 00:27:32.489 "abort_timeout_sec": 1, 00:27:32.489 "ack_timeout": 0, 00:27:32.489 "data_wr_pool_size": 0 00:27:32.489 } 00:27:32.489 } 00:27:32.489 ] 00:27:32.489 }, 00:27:32.489 { 00:27:32.489 "subsystem": "iscsi", 00:27:32.489 "config": [ 00:27:32.489 { 00:27:32.489 "method": "iscsi_set_options", 00:27:32.489 "params": { 00:27:32.489 "node_base": "iqn.2016-06.io.spdk", 00:27:32.489 "max_sessions": 128, 00:27:32.489 "max_connections_per_session": 2, 00:27:32.489 "max_queue_depth": 64, 00:27:32.489 
"default_time2wait": 2, 00:27:32.489 "default_time2retain": 20, 00:27:32.489 "first_burst_length": 8192, 00:27:32.489 "immediate_data": true, 00:27:32.489 "allow_duplicated_isid": false, 00:27:32.489 "error_recovery_level": 0, 00:27:32.489 "nop_timeout": 60, 00:27:32.489 "nop_in_interval": 30, 00:27:32.489 "disable_chap": false, 00:27:32.489 "require_chap": false, 00:27:32.489 "mutual_chap": false, 00:27:32.489 "chap_group": 0, 00:27:32.489 "max_large_datain_per_connection": 64, 00:27:32.489 "max_r2t_per_connection": 4, 00:27:32.489 "pdu_pool_size": 36864, 00:27:32.489 "immediate_data_pool_size": 16384, 00:27:32.489 "data_out_pool_size": 2048 00:27:32.489 } 00:27:32.489 } 00:27:32.489 ] 00:27:32.489 } 00:27:32.489 ] 00:27:32.489 } 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57444 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57444 ']' 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57444 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57444 00:27:32.489 killing process with pid 57444 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57444' 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57444 00:27:32.489 23:09:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57444 00:27:33.898 23:09:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57483 00:27:33.898 23:09:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:33.898 23:09:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57483 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57483 ']' 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57483 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57483 00:27:39.188 killing process with pid 57483 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57483' 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57483 00:27:39.188 23:09:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57483 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:40.130 00:27:40.130 real 0m8.493s 00:27:40.130 user 0m8.092s 00:27:40.130 sys 0m0.586s 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.130 ************************************ 00:27:40.130 END TEST skip_rpc_with_json 00:27:40.130 ************************************ 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:40.130 23:09:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:40.130 ************************************ 00:27:40.130 START TEST skip_rpc_with_delay 00:27:40.130 ************************************ 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:40.130 [2024-12-09 23:09:20.583929] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
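That *ERROR* line is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt rejects the contradictory pair --no-rpc-server --wait-for-rpc (waiting for an RPC that can never arrive). The harness expresses expected failures with its NOT wrapper; a minimal stand-in for the idiom, not the real autotest_common.sh helper:

    # NOT: succeed only when the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }

    NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc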
00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:40.130 ************************************ 00:27:40.130 END TEST skip_rpc_with_delay 00:27:40.130 ************************************ 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:40.130 00:27:40.130 real 0m0.121s 00:27:40.130 user 0m0.061s 00:27:40.130 sys 0m0.059s 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.130 23:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:27:40.130 23:09:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:27:40.130 23:09:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:27:40.130 23:09:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.130 23:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:40.130 ************************************ 00:27:40.130 START TEST exit_on_failed_rpc_init 00:27:40.130 ************************************ 00:27:40.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57600 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57600 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57600 ']' 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.130 23:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.130 [2024-12-09 23:09:20.745482] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:27:40.130 [2024-12-09 23:09:20.745604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57600 ] 00:27:40.392 [2024-12-09 23:09:20.907678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.392 [2024-12-09 23:09:21.008608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:27:41.333 23:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:41.333 [2024-12-09 23:09:21.722519] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:27:41.333 [2024-12-09 23:09:21.722797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57618 ] 00:27:41.333 [2024-12-09 23:09:21.873601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.591 [2024-12-09 23:09:21.975958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.591 [2024-12-09 23:09:21.976087] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
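The "socket in use" error above is deliberately provoked: exit_on_failed_rpc_init keeps the first target (pid 57600) on the default /var/tmp/spdk.sock and verifies that a second instance aborts during RPC init instead of limping on. Outside a negative test, two targets coexist by giving each its own socket with -r and addressing it with rpc.py -s; the socket names here are illustrative:

    # Two targets, two sockets, disjoint core masks.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    sleep 5

    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version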
00:27:41.591 [2024-12-09 23:09:21.976101] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:41.591 [2024-12-09 23:09:21.976114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57600 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57600 ']' 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57600 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57600 00:27:41.591 killing process with pid 57600 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57600' 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57600 00:27:41.591 23:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57600 00:27:43.501 00:27:43.501 real 0m3.034s 00:27:43.501 user 0m3.358s 00:27:43.501 sys 0m0.393s 00:27:43.501 23:09:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.501 ************************************ 00:27:43.501 END TEST exit_on_failed_rpc_init 00:27:43.501 ************************************ 00:27:43.501 23:09:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 23:09:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:43.501 00:27:43.501 real 0m18.221s 00:27:43.501 user 0m17.546s 00:27:43.501 sys 0m1.474s 00:27:43.501 ************************************ 00:27:43.501 END TEST skip_rpc 00:27:43.501 ************************************ 00:27:43.501 23:09:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.501 23:09:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 23:09:23 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:43.501 23:09:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.501 23:09:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.501 23:09:23 -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 
************************************ 00:27:43.501 START TEST rpc_client 00:27:43.501 ************************************ 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:43.501 * Looking for test storage... 00:27:43.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.501 23:09:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.501 --rc genhtml_branch_coverage=1 00:27:43.501 --rc genhtml_function_coverage=1 00:27:43.501 --rc genhtml_legend=1 00:27:43.501 --rc geninfo_all_blocks=1 00:27:43.501 --rc geninfo_unexecuted_blocks=1 00:27:43.501 00:27:43.501 ' 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.501 --rc genhtml_branch_coverage=1 00:27:43.501 --rc genhtml_function_coverage=1 00:27:43.501 --rc genhtml_legend=1 00:27:43.501 --rc geninfo_all_blocks=1 00:27:43.501 --rc geninfo_unexecuted_blocks=1 00:27:43.501 00:27:43.501 ' 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.501 --rc genhtml_branch_coverage=1 00:27:43.501 --rc genhtml_function_coverage=1 00:27:43.501 --rc genhtml_legend=1 00:27:43.501 --rc geninfo_all_blocks=1 00:27:43.501 --rc geninfo_unexecuted_blocks=1 00:27:43.501 00:27:43.501 ' 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.501 --rc genhtml_branch_coverage=1 00:27:43.501 --rc genhtml_function_coverage=1 00:27:43.501 --rc genhtml_legend=1 00:27:43.501 --rc geninfo_all_blocks=1 00:27:43.501 --rc geninfo_unexecuted_blocks=1 00:27:43.501 00:27:43.501 ' 00:27:43.501 23:09:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:27:43.501 OK 00:27:43.501 23:09:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:27:43.501 00:27:43.501 real 0m0.183s 00:27:43.501 user 0m0.111s 00:27:43.501 sys 0m0.080s 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.501 23:09:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 ************************************ 00:27:43.501 END TEST rpc_client 00:27:43.501 ************************************ 00:27:43.501 23:09:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:43.501 23:09:23 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.501 23:09:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.501 23:09:23 -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 ************************************ 00:27:43.501 START TEST json_config 00:27:43.501 ************************************ 00:27:43.501 23:09:23 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:43.501 23:09:24 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:43.501 23:09:24 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:27:43.501 23:09:24 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:43.501 23:09:24 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:43.501 23:09:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.501 23:09:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.501 23:09:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.502 23:09:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.502 23:09:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.502 23:09:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.502 23:09:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.502 23:09:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:27:43.502 23:09:24 json_config -- scripts/common.sh@345 -- # : 1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.502 23:09:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.502 23:09:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@353 -- # local d=1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.502 23:09:24 json_config -- scripts/common.sh@355 -- # echo 1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.502 23:09:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@353 -- # local d=2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.502 23:09:24 json_config -- scripts/common.sh@355 -- # echo 2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.502 23:09:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.502 23:09:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.502 23:09:24 json_config -- scripts/common.sh@368 -- # return 0 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:43.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.502 --rc genhtml_branch_coverage=1 00:27:43.502 --rc genhtml_function_coverage=1 00:27:43.502 --rc genhtml_legend=1 00:27:43.502 --rc geninfo_all_blocks=1 00:27:43.502 --rc geninfo_unexecuted_blocks=1 00:27:43.502 00:27:43.502 ' 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:43.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.502 --rc genhtml_branch_coverage=1 00:27:43.502 --rc genhtml_function_coverage=1 00:27:43.502 --rc genhtml_legend=1 00:27:43.502 --rc geninfo_all_blocks=1 00:27:43.502 --rc geninfo_unexecuted_blocks=1 00:27:43.502 00:27:43.502 ' 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:43.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.502 --rc genhtml_branch_coverage=1 00:27:43.502 --rc genhtml_function_coverage=1 00:27:43.502 --rc genhtml_legend=1 00:27:43.502 --rc geninfo_all_blocks=1 00:27:43.502 --rc geninfo_unexecuted_blocks=1 00:27:43.502 00:27:43.502 ' 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:43.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.502 --rc genhtml_branch_coverage=1 00:27:43.502 --rc genhtml_function_coverage=1 00:27:43.502 --rc genhtml_legend=1 00:27:43.502 --rc geninfo_all_blocks=1 00:27:43.502 --rc geninfo_unexecuted_blocks=1 00:27:43.502 00:27:43.502 ' 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.502 23:09:24 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee621cbe-db37-404e-aebf-629496038471 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ee621cbe-db37-404e-aebf-629496038471 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:43.502 23:09:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.502 23:09:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.502 23:09:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.502 23:09:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.502 23:09:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.502 23:09:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.502 23:09:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.502 23:09:24 json_config -- paths/export.sh@5 -- # export PATH 00:27:43.502 23:09:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@51 -- # : 0 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.502 23:09:24 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.502 23:09:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:27:43.502 WARNING: No tests are enabled so not running JSON configuration tests 00:27:43.502 23:09:24 json_config -- json_config/json_config.sh@28 -- # exit 0 00:27:43.502 00:27:43.502 real 0m0.127s 00:27:43.502 user 0m0.082s 00:27:43.502 sys 0m0.048s 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.502 23:09:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:27:43.502 ************************************ 00:27:43.502 END TEST json_config 00:27:43.502 ************************************ 00:27:43.765 23:09:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:43.765 23:09:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.765 23:09:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.765 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:27:43.765 ************************************ 00:27:43.765 START TEST json_config_extra_key 00:27:43.765 ************************************ 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.765 23:09:24 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:43.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.765 --rc genhtml_branch_coverage=1 00:27:43.765 --rc genhtml_function_coverage=1 00:27:43.765 --rc genhtml_legend=1 00:27:43.765 --rc geninfo_all_blocks=1 00:27:43.765 --rc geninfo_unexecuted_blocks=1 00:27:43.765 00:27:43.765 ' 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:43.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.765 --rc genhtml_branch_coverage=1 00:27:43.765 --rc genhtml_function_coverage=1 00:27:43.765 --rc genhtml_legend=1 00:27:43.765 --rc geninfo_all_blocks=1 00:27:43.765 --rc geninfo_unexecuted_blocks=1 00:27:43.765 00:27:43.765 ' 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:43.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.765 --rc genhtml_branch_coverage=1 00:27:43.765 --rc genhtml_function_coverage=1 00:27:43.765 --rc genhtml_legend=1 00:27:43.765 --rc geninfo_all_blocks=1 00:27:43.765 --rc geninfo_unexecuted_blocks=1 00:27:43.765 00:27:43.765 ' 00:27:43.765 23:09:24 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:43.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.765 --rc genhtml_branch_coverage=1 00:27:43.765 --rc 
genhtml_function_coverage=1 00:27:43.765 --rc genhtml_legend=1 00:27:43.765 --rc geninfo_all_blocks=1 00:27:43.765 --rc geninfo_unexecuted_blocks=1 00:27:43.765 00:27:43.765 ' 00:27:43.765 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee621cbe-db37-404e-aebf-629496038471 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ee621cbe-db37-404e-aebf-629496038471 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.765 23:09:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.765 23:09:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.765 23:09:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.765 23:09:24 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.765 23:09:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:27:43.765 23:09:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.765 23:09:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.766 23:09:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.766 23:09:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.766 23:09:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.766 23:09:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:27:43.766 INFO: launching applications... 
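
[Editor's note] The "[: : integer expression expected" message captured above (nvmf/common.sh line 33, traced as '[' '' -eq 1 ']') is a classic bash pitfall: -eq requires integer operands, and the variable under test is empty. A minimal sketch of the failure and two guards that avoid it; the flag name below is hypothetical, standing in for whatever unset variable the script tested:

    #!/usr/bin/env bash
    # Reproduces "[: : integer expression expected": an empty string is not an integer.
    SPDK_TEST_EXAMPLE=""   # hypothetical flag; unset/empty in the failing run

    if [ "$SPDK_TEST_EXAMPLE" -eq 1 ]; then   # prints the error, evaluates false
        echo "flag enabled"
    fi

    # Guard 1: default the empty value to 0 before the integer compare.
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

    # Guard 2: a string compare never raises the error.
    if [[ "$SPDK_TEST_EXAMPLE" == 1 ]]; then
        echo "flag enabled"
    fi
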
00:27:43.766 23:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57812 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:27:43.766 Waiting for target to run... 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57812 /var/tmp/spdk_tgt.sock 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57812 ']' 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:27:43.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.766 23:09:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:27:43.766 23:09:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:43.766 [2024-12-09 23:09:24.363798] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:27:43.766 [2024-12-09 23:09:24.364069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57812 ] 00:27:44.339 [2024-12-09 23:09:24.692848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.339 [2024-12-09 23:09:24.791620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.914 23:09:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.914 00:27:44.914 INFO: shutting down applications... 00:27:44.914 23:09:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:27:44.914 23:09:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
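
[Editor's note] The trace above shows the harness recording the new target in its app_pid/app_socket associative arrays, launching spdk_tgt with the extra_key.json config, and blocking in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-poll pattern, assuming the binary and script paths from this log; the loop itself is a simplification of the real waitforlisten helper:

    #!/usr/bin/env bash
    # Launch an SPDK target and wait for its UNIX-domain RPC socket to come up.
    declare -A app_pid app_socket
    app_socket[target]=/var/tmp/spdk_tgt.sock

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "${app_socket[target]}" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid[target]=$!

    # Poll until an RPC round-trip succeeds, like waitforlisten does.
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s "${app_socket[target]}" rpc_get_methods &>/dev/null; then
            echo "target listening on ${app_socket[target]} (pid ${app_pid[target]})"
            break
        fi
        sleep 0.1
    done
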
00:27:44.914 23:09:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57812 ]] 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57812 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:27:44.914 23:09:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:45.175 23:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:45.175 23:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:45.175 23:09:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:27:45.175 23:09:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:45.745 23:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:45.745 23:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:45.745 23:09:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:27:45.745 23:09:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:46.316 23:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:46.316 23:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:46.316 23:09:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:27:46.316 23:09:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:46.888 SPDK target shutdown done 00:27:46.888 Success 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:27:46.888 23:09:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:27:46.888 23:09:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:27:46.888 ************************************ 00:27:46.888 END TEST json_config_extra_key 00:27:46.888 ************************************ 00:27:46.888 00:27:46.888 real 0m3.152s 00:27:46.888 user 0m2.784s 00:27:46.888 sys 0m0.389s 00:27:46.888 23:09:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.888 23:09:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:27:46.888 23:09:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:46.888 23:09:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:46.888 23:09:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.888 23:09:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.888 
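
[Editor's note] The shutdown sequence above sends SIGINT, then polls with "kill -0" (which only tests process existence) in 0.5 s steps for up to 30 iterations before declaring "SPDK target shutdown done". A sketch of that graceful-shutdown loop; the pid value is just the one from this run, and the SIGKILL escalation at the end is an assumption, not something this trace shows:

    #!/usr/bin/env bash
    # SIGINT-then-poll shutdown, modeled on the json_config_test_shutdown_app trace.
    pid=57812
    kill -SIGINT "$pid" 2>/dev/null

    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then   # process gone: clean shutdown
            echo "SPDK target shutdown done"
            break
        fi
        sleep 0.5
    done

    # Hypothetical escalation if the target ignored SIGINT for the whole window.
    kill -0 "$pid" 2>/dev/null && kill -SIGKILL "$pid"
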
************************************ 00:27:46.888 START TEST alias_rpc 00:27:46.888 ************************************ 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:46.888 * Looking for test storage... 00:27:46.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:27:46.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.888 23:09:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.888 23:09:27 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:46.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.888 --rc genhtml_branch_coverage=1 00:27:46.888 --rc genhtml_function_coverage=1 00:27:46.888 --rc genhtml_legend=1 00:27:46.889 --rc geninfo_all_blocks=1 00:27:46.889 --rc geninfo_unexecuted_blocks=1 00:27:46.889 00:27:46.889 ' 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:46.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.889 --rc genhtml_branch_coverage=1 00:27:46.889 --rc genhtml_function_coverage=1 00:27:46.889 --rc genhtml_legend=1 00:27:46.889 --rc geninfo_all_blocks=1 00:27:46.889 --rc geninfo_unexecuted_blocks=1 00:27:46.889 00:27:46.889 ' 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:46.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.889 --rc genhtml_branch_coverage=1 00:27:46.889 --rc genhtml_function_coverage=1 00:27:46.889 --rc genhtml_legend=1 00:27:46.889 --rc geninfo_all_blocks=1 00:27:46.889 --rc geninfo_unexecuted_blocks=1 00:27:46.889 00:27:46.889 ' 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:46.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.889 --rc genhtml_branch_coverage=1 00:27:46.889 --rc genhtml_function_coverage=1 00:27:46.889 --rc genhtml_legend=1 00:27:46.889 --rc geninfo_all_blocks=1 00:27:46.889 --rc geninfo_unexecuted_blocks=1 00:27:46.889 00:27:46.889 ' 00:27:46.889 23:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:27:46.889 23:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57905 00:27:46.889 23:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57905 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57905 ']' 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.889 23:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.889 23:09:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.150 [2024-12-09 23:09:27.567426] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:27:47.150 [2024-12-09 23:09:27.568245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57905 ] 00:27:47.150 [2024-12-09 23:09:27.726242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.409 [2024-12-09 23:09:27.829332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.980 23:09:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.980 23:09:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:47.980 23:09:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:27:48.239 23:09:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57905 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57905 ']' 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57905 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57905 00:27:48.239 killing process with pid 57905 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57905' 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 57905 00:27:48.239 23:09:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 57905 00:27:50.201 ************************************ 00:27:50.201 END TEST alias_rpc 00:27:50.201 ************************************ 00:27:50.201 00:27:50.201 real 0m2.988s 00:27:50.201 user 0m3.106s 00:27:50.201 sys 0m0.433s 00:27:50.201 23:09:30 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.201 23:09:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:50.201 23:09:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:27:50.201 23:09:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:50.201 23:09:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:50.201 23:09:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.201 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:27:50.201 ************************************ 00:27:50.201 START TEST spdkcli_tcp 00:27:50.201 ************************************ 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:50.201 * Looking for test storage... 
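
[Editor's note] The alias_rpc teardown above goes through a killprocess helper: it confirms pid 57905 is still alive, checks the OS and the process's command name (reactor_0 here, refusing to signal a sudo wrapper), then kills and reaps it. A hypothetical reconstruction of that helper, modeled on the trace:

    #!/usr/bin/env bash
    # Sketch of killprocess(), following the checks visible in the trace above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1          # still running?
        [ "$(uname)" = Linux ] || return 1              # trace only handles Linux here
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0
        [ "$name" = sudo ] && return 1                  # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap if it is our child
    }
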
00:27:50.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.201 23:09:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.201 --rc genhtml_branch_coverage=1 00:27:50.201 --rc genhtml_function_coverage=1 00:27:50.201 --rc genhtml_legend=1 00:27:50.201 --rc geninfo_all_blocks=1 00:27:50.201 --rc geninfo_unexecuted_blocks=1 00:27:50.201 00:27:50.201 ' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.201 --rc genhtml_branch_coverage=1 00:27:50.201 --rc genhtml_function_coverage=1 00:27:50.201 --rc genhtml_legend=1 00:27:50.201 --rc geninfo_all_blocks=1 00:27:50.201 --rc geninfo_unexecuted_blocks=1 00:27:50.201 
00:27:50.201 ' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.201 --rc genhtml_branch_coverage=1 00:27:50.201 --rc genhtml_function_coverage=1 00:27:50.201 --rc genhtml_legend=1 00:27:50.201 --rc geninfo_all_blocks=1 00:27:50.201 --rc geninfo_unexecuted_blocks=1 00:27:50.201 00:27:50.201 ' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.201 --rc genhtml_branch_coverage=1 00:27:50.201 --rc genhtml_function_coverage=1 00:27:50.201 --rc genhtml_legend=1 00:27:50.201 --rc geninfo_all_blocks=1 00:27:50.201 --rc geninfo_unexecuted_blocks=1 00:27:50.201 00:27:50.201 ' 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58001 00:27:50.201 23:09:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58001 00:27:50.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58001 ']' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.201 23:09:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.201 [2024-12-09 23:09:30.603271] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:27:50.201 [2024-12-09 23:09:30.603391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58001 ] 00:27:50.201 [2024-12-09 23:09:30.761424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:50.462 [2024-12-09 23:09:30.864644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.462 [2024-12-09 23:09:30.864893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.034 23:09:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.034 23:09:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:27:51.034 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58018 00:27:51.034 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:27:51.034 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:27:51.034 [ 00:27:51.034 "bdev_malloc_delete", 00:27:51.034 "bdev_malloc_create", 00:27:51.034 "bdev_null_resize", 00:27:51.034 "bdev_null_delete", 00:27:51.034 "bdev_null_create", 00:27:51.034 "bdev_nvme_cuse_unregister", 00:27:51.034 "bdev_nvme_cuse_register", 00:27:51.034 "bdev_opal_new_user", 00:27:51.034 "bdev_opal_set_lock_state", 00:27:51.034 "bdev_opal_delete", 00:27:51.034 "bdev_opal_get_info", 00:27:51.034 "bdev_opal_create", 00:27:51.034 "bdev_nvme_opal_revert", 00:27:51.034 "bdev_nvme_opal_init", 00:27:51.034 "bdev_nvme_send_cmd", 00:27:51.034 "bdev_nvme_set_keys", 00:27:51.034 "bdev_nvme_get_path_iostat", 00:27:51.034 "bdev_nvme_get_mdns_discovery_info", 00:27:51.034 "bdev_nvme_stop_mdns_discovery", 00:27:51.034 "bdev_nvme_start_mdns_discovery", 00:27:51.034 "bdev_nvme_set_multipath_policy", 00:27:51.034 "bdev_nvme_set_preferred_path", 00:27:51.034 "bdev_nvme_get_io_paths", 00:27:51.034 "bdev_nvme_remove_error_injection", 00:27:51.034 "bdev_nvme_add_error_injection", 00:27:51.034 "bdev_nvme_get_discovery_info", 00:27:51.034 "bdev_nvme_stop_discovery", 00:27:51.034 "bdev_nvme_start_discovery", 00:27:51.035 "bdev_nvme_get_controller_health_info", 00:27:51.035 "bdev_nvme_disable_controller", 00:27:51.035 "bdev_nvme_enable_controller", 00:27:51.035 "bdev_nvme_reset_controller", 00:27:51.035 "bdev_nvme_get_transport_statistics", 00:27:51.035 "bdev_nvme_apply_firmware", 00:27:51.035 "bdev_nvme_detach_controller", 00:27:51.035 "bdev_nvme_get_controllers", 00:27:51.035 "bdev_nvme_attach_controller", 00:27:51.035 "bdev_nvme_set_hotplug", 00:27:51.035 "bdev_nvme_set_options", 00:27:51.035 "bdev_passthru_delete", 00:27:51.035 "bdev_passthru_create", 00:27:51.035 "bdev_lvol_set_parent_bdev", 00:27:51.035 "bdev_lvol_set_parent", 00:27:51.035 "bdev_lvol_check_shallow_copy", 00:27:51.035 "bdev_lvol_start_shallow_copy", 00:27:51.035 "bdev_lvol_grow_lvstore", 00:27:51.035 "bdev_lvol_get_lvols", 00:27:51.035 "bdev_lvol_get_lvstores", 00:27:51.035 "bdev_lvol_delete", 00:27:51.035 "bdev_lvol_set_read_only", 00:27:51.035 "bdev_lvol_resize", 00:27:51.035 "bdev_lvol_decouple_parent", 00:27:51.035 "bdev_lvol_inflate", 00:27:51.035 "bdev_lvol_rename", 00:27:51.035 "bdev_lvol_clone_bdev", 00:27:51.035 "bdev_lvol_clone", 00:27:51.035 "bdev_lvol_snapshot", 00:27:51.035 "bdev_lvol_create", 00:27:51.035 "bdev_lvol_delete_lvstore", 00:27:51.035 "bdev_lvol_rename_lvstore", 00:27:51.035 
"bdev_lvol_create_lvstore", 00:27:51.035 "bdev_raid_set_options", 00:27:51.035 "bdev_raid_remove_base_bdev", 00:27:51.035 "bdev_raid_add_base_bdev", 00:27:51.035 "bdev_raid_delete", 00:27:51.035 "bdev_raid_create", 00:27:51.035 "bdev_raid_get_bdevs", 00:27:51.035 "bdev_error_inject_error", 00:27:51.035 "bdev_error_delete", 00:27:51.035 "bdev_error_create", 00:27:51.035 "bdev_split_delete", 00:27:51.035 "bdev_split_create", 00:27:51.035 "bdev_delay_delete", 00:27:51.035 "bdev_delay_create", 00:27:51.035 "bdev_delay_update_latency", 00:27:51.035 "bdev_zone_block_delete", 00:27:51.035 "bdev_zone_block_create", 00:27:51.035 "blobfs_create", 00:27:51.035 "blobfs_detect", 00:27:51.035 "blobfs_set_cache_size", 00:27:51.035 "bdev_xnvme_delete", 00:27:51.035 "bdev_xnvme_create", 00:27:51.035 "bdev_aio_delete", 00:27:51.035 "bdev_aio_rescan", 00:27:51.035 "bdev_aio_create", 00:27:51.035 "bdev_ftl_set_property", 00:27:51.035 "bdev_ftl_get_properties", 00:27:51.035 "bdev_ftl_get_stats", 00:27:51.035 "bdev_ftl_unmap", 00:27:51.035 "bdev_ftl_unload", 00:27:51.035 "bdev_ftl_delete", 00:27:51.035 "bdev_ftl_load", 00:27:51.035 "bdev_ftl_create", 00:27:51.035 "bdev_virtio_attach_controller", 00:27:51.035 "bdev_virtio_scsi_get_devices", 00:27:51.035 "bdev_virtio_detach_controller", 00:27:51.035 "bdev_virtio_blk_set_hotplug", 00:27:51.035 "bdev_iscsi_delete", 00:27:51.035 "bdev_iscsi_create", 00:27:51.035 "bdev_iscsi_set_options", 00:27:51.035 "accel_error_inject_error", 00:27:51.035 "ioat_scan_accel_module", 00:27:51.035 "dsa_scan_accel_module", 00:27:51.035 "iaa_scan_accel_module", 00:27:51.035 "keyring_file_remove_key", 00:27:51.035 "keyring_file_add_key", 00:27:51.035 "keyring_linux_set_options", 00:27:51.035 "fsdev_aio_delete", 00:27:51.035 "fsdev_aio_create", 00:27:51.035 "iscsi_get_histogram", 00:27:51.035 "iscsi_enable_histogram", 00:27:51.035 "iscsi_set_options", 00:27:51.035 "iscsi_get_auth_groups", 00:27:51.035 "iscsi_auth_group_remove_secret", 00:27:51.035 "iscsi_auth_group_add_secret", 00:27:51.035 "iscsi_delete_auth_group", 00:27:51.035 "iscsi_create_auth_group", 00:27:51.035 "iscsi_set_discovery_auth", 00:27:51.035 "iscsi_get_options", 00:27:51.035 "iscsi_target_node_request_logout", 00:27:51.035 "iscsi_target_node_set_redirect", 00:27:51.035 "iscsi_target_node_set_auth", 00:27:51.035 "iscsi_target_node_add_lun", 00:27:51.035 "iscsi_get_stats", 00:27:51.035 "iscsi_get_connections", 00:27:51.035 "iscsi_portal_group_set_auth", 00:27:51.035 "iscsi_start_portal_group", 00:27:51.035 "iscsi_delete_portal_group", 00:27:51.035 "iscsi_create_portal_group", 00:27:51.035 "iscsi_get_portal_groups", 00:27:51.035 "iscsi_delete_target_node", 00:27:51.035 "iscsi_target_node_remove_pg_ig_maps", 00:27:51.035 "iscsi_target_node_add_pg_ig_maps", 00:27:51.035 "iscsi_create_target_node", 00:27:51.035 "iscsi_get_target_nodes", 00:27:51.035 "iscsi_delete_initiator_group", 00:27:51.035 "iscsi_initiator_group_remove_initiators", 00:27:51.035 "iscsi_initiator_group_add_initiators", 00:27:51.035 "iscsi_create_initiator_group", 00:27:51.035 "iscsi_get_initiator_groups", 00:27:51.035 "nvmf_set_crdt", 00:27:51.035 "nvmf_set_config", 00:27:51.035 "nvmf_set_max_subsystems", 00:27:51.035 "nvmf_stop_mdns_prr", 00:27:51.035 "nvmf_publish_mdns_prr", 00:27:51.035 "nvmf_subsystem_get_listeners", 00:27:51.035 "nvmf_subsystem_get_qpairs", 00:27:51.035 "nvmf_subsystem_get_controllers", 00:27:51.035 "nvmf_get_stats", 00:27:51.035 "nvmf_get_transports", 00:27:51.035 "nvmf_create_transport", 00:27:51.035 "nvmf_get_targets", 00:27:51.035 
"nvmf_delete_target", 00:27:51.035 "nvmf_create_target", 00:27:51.035 "nvmf_subsystem_allow_any_host", 00:27:51.035 "nvmf_subsystem_set_keys", 00:27:51.035 "nvmf_subsystem_remove_host", 00:27:51.035 "nvmf_subsystem_add_host", 00:27:51.035 "nvmf_ns_remove_host", 00:27:51.035 "nvmf_ns_add_host", 00:27:51.035 "nvmf_subsystem_remove_ns", 00:27:51.035 "nvmf_subsystem_set_ns_ana_group", 00:27:51.035 "nvmf_subsystem_add_ns", 00:27:51.035 "nvmf_subsystem_listener_set_ana_state", 00:27:51.035 "nvmf_discovery_get_referrals", 00:27:51.035 "nvmf_discovery_remove_referral", 00:27:51.035 "nvmf_discovery_add_referral", 00:27:51.035 "nvmf_subsystem_remove_listener", 00:27:51.035 "nvmf_subsystem_add_listener", 00:27:51.035 "nvmf_delete_subsystem", 00:27:51.035 "nvmf_create_subsystem", 00:27:51.035 "nvmf_get_subsystems", 00:27:51.035 "env_dpdk_get_mem_stats", 00:27:51.035 "nbd_get_disks", 00:27:51.035 "nbd_stop_disk", 00:27:51.035 "nbd_start_disk", 00:27:51.035 "ublk_recover_disk", 00:27:51.035 "ublk_get_disks", 00:27:51.035 "ublk_stop_disk", 00:27:51.035 "ublk_start_disk", 00:27:51.035 "ublk_destroy_target", 00:27:51.035 "ublk_create_target", 00:27:51.035 "virtio_blk_create_transport", 00:27:51.035 "virtio_blk_get_transports", 00:27:51.035 "vhost_controller_set_coalescing", 00:27:51.035 "vhost_get_controllers", 00:27:51.035 "vhost_delete_controller", 00:27:51.035 "vhost_create_blk_controller", 00:27:51.035 "vhost_scsi_controller_remove_target", 00:27:51.035 "vhost_scsi_controller_add_target", 00:27:51.035 "vhost_start_scsi_controller", 00:27:51.035 "vhost_create_scsi_controller", 00:27:51.035 "thread_set_cpumask", 00:27:51.035 "scheduler_set_options", 00:27:51.035 "framework_get_governor", 00:27:51.035 "framework_get_scheduler", 00:27:51.035 "framework_set_scheduler", 00:27:51.035 "framework_get_reactors", 00:27:51.035 "thread_get_io_channels", 00:27:51.035 "thread_get_pollers", 00:27:51.035 "thread_get_stats", 00:27:51.035 "framework_monitor_context_switch", 00:27:51.035 "spdk_kill_instance", 00:27:51.035 "log_enable_timestamps", 00:27:51.035 "log_get_flags", 00:27:51.035 "log_clear_flag", 00:27:51.035 "log_set_flag", 00:27:51.035 "log_get_level", 00:27:51.035 "log_set_level", 00:27:51.035 "log_get_print_level", 00:27:51.035 "log_set_print_level", 00:27:51.035 "framework_enable_cpumask_locks", 00:27:51.035 "framework_disable_cpumask_locks", 00:27:51.035 "framework_wait_init", 00:27:51.035 "framework_start_init", 00:27:51.035 "scsi_get_devices", 00:27:51.035 "bdev_get_histogram", 00:27:51.035 "bdev_enable_histogram", 00:27:51.035 "bdev_set_qos_limit", 00:27:51.035 "bdev_set_qd_sampling_period", 00:27:51.035 "bdev_get_bdevs", 00:27:51.035 "bdev_reset_iostat", 00:27:51.035 "bdev_get_iostat", 00:27:51.035 "bdev_examine", 00:27:51.035 "bdev_wait_for_examine", 00:27:51.035 "bdev_set_options", 00:27:51.035 "accel_get_stats", 00:27:51.035 "accel_set_options", 00:27:51.035 "accel_set_driver", 00:27:51.035 "accel_crypto_key_destroy", 00:27:51.035 "accel_crypto_keys_get", 00:27:51.035 "accel_crypto_key_create", 00:27:51.035 "accel_assign_opc", 00:27:51.035 "accel_get_module_info", 00:27:51.035 "accel_get_opc_assignments", 00:27:51.035 "vmd_rescan", 00:27:51.035 "vmd_remove_device", 00:27:51.035 "vmd_enable", 00:27:51.035 "sock_get_default_impl", 00:27:51.035 "sock_set_default_impl", 00:27:51.035 "sock_impl_set_options", 00:27:51.035 "sock_impl_get_options", 00:27:51.035 "iobuf_get_stats", 00:27:51.035 "iobuf_set_options", 00:27:51.035 "keyring_get_keys", 00:27:51.035 "framework_get_pci_devices", 00:27:51.035 
"framework_get_config", 00:27:51.035 "framework_get_subsystems", 00:27:51.035 "fsdev_set_opts", 00:27:51.035 "fsdev_get_opts", 00:27:51.035 "trace_get_info", 00:27:51.035 "trace_get_tpoint_group_mask", 00:27:51.035 "trace_disable_tpoint_group", 00:27:51.035 "trace_enable_tpoint_group", 00:27:51.035 "trace_clear_tpoint_mask", 00:27:51.035 "trace_set_tpoint_mask", 00:27:51.035 "notify_get_notifications", 00:27:51.035 "notify_get_types", 00:27:51.035 "spdk_get_version", 00:27:51.035 "rpc_get_methods" 00:27:51.035 ] 00:27:51.296 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:51.296 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:51.296 23:09:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58001 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58001 ']' 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58001 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58001 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58001' 00:27:51.296 killing process with pid 58001 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58001 00:27:51.296 23:09:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58001 00:27:52.683 00:27:52.683 real 0m2.878s 00:27:52.683 user 0m5.182s 00:27:52.683 sys 0m0.439s 00:27:52.683 23:09:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.683 23:09:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:52.683 ************************************ 00:27:52.683 END TEST spdkcli_tcp 00:27:52.683 ************************************ 00:27:52.683 23:09:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:52.683 23:09:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.683 23:09:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.683 23:09:33 -- common/autotest_common.sh@10 -- # set +x 00:27:52.683 ************************************ 00:27:52.683 START TEST dpdk_mem_utility 00:27:52.683 ************************************ 00:27:52.683 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:52.944 * Looking for test storage... 
00:27:52.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:27:52.944 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:27:52.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.945 23:09:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:52.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.945 --rc genhtml_branch_coverage=1 00:27:52.945 --rc genhtml_function_coverage=1 00:27:52.945 --rc genhtml_legend=1 00:27:52.945 --rc geninfo_all_blocks=1 00:27:52.945 --rc geninfo_unexecuted_blocks=1 00:27:52.945 00:27:52.945 ' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:52.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.945 --rc genhtml_branch_coverage=1 00:27:52.945 --rc genhtml_function_coverage=1 00:27:52.945 --rc genhtml_legend=1 00:27:52.945 --rc geninfo_all_blocks=1 00:27:52.945 --rc geninfo_unexecuted_blocks=1 00:27:52.945 00:27:52.945 ' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:52.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.945 --rc genhtml_branch_coverage=1 00:27:52.945 --rc genhtml_function_coverage=1 00:27:52.945 --rc genhtml_legend=1 00:27:52.945 --rc geninfo_all_blocks=1 00:27:52.945 --rc geninfo_unexecuted_blocks=1 00:27:52.945 00:27:52.945 ' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:52.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.945 --rc genhtml_branch_coverage=1 00:27:52.945 --rc genhtml_function_coverage=1 00:27:52.945 --rc genhtml_legend=1 00:27:52.945 --rc geninfo_all_blocks=1 00:27:52.945 --rc geninfo_unexecuted_blocks=1 00:27:52.945 00:27:52.945 ' 00:27:52.945 23:09:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:52.945 23:09:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58112 00:27:52.945 23:09:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58112 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58112 ']' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.945 23:09:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:52.945 23:09:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:52.945 [2024-12-09 23:09:33.501357] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:27:52.945 [2024-12-09 23:09:33.501636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58112 ] 00:27:53.207 [2024-12-09 23:09:33.660968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.207 [2024-12-09 23:09:33.762414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.779 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.779 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:27:53.779 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:27:53.779 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:27:53.779 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.779 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:53.779 { 00:27:53.779 "filename": "/tmp/spdk_mem_dump.txt" 00:27:53.779 } 00:27:53.779 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.779 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:54.042 DPDK memory size 824.000000 MiB in 1 heap(s) 00:27:54.042 1 heaps totaling size 824.000000 MiB 00:27:54.042 size: 824.000000 MiB heap id: 0 00:27:54.042 end heaps---------- 00:27:54.042 9 mempools totaling size 603.782043 MiB 00:27:54.042 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:27:54.042 size: 158.602051 MiB name: PDU_data_out_Pool 00:27:54.042 size: 100.555481 MiB name: bdev_io_58112 00:27:54.042 size: 50.003479 MiB name: msgpool_58112 00:27:54.042 size: 36.509338 MiB name: fsdev_io_58112 00:27:54.042 size: 21.763794 MiB name: PDU_Pool 00:27:54.042 size: 19.513306 MiB name: SCSI_TASK_Pool 00:27:54.042 size: 4.133484 MiB name: evtpool_58112 00:27:54.042 size: 0.026123 MiB name: Session_Pool 00:27:54.042 end mempools------- 00:27:54.042 6 memzones totaling size 4.142822 MiB 00:27:54.042 size: 1.000366 MiB name: RG_ring_0_58112 00:27:54.042 size: 1.000366 MiB name: RG_ring_1_58112 00:27:54.042 size: 1.000366 MiB name: RG_ring_4_58112 00:27:54.042 size: 1.000366 MiB name: RG_ring_5_58112 00:27:54.042 size: 0.125366 MiB name: RG_ring_2_58112 00:27:54.042 size: 0.015991 MiB name: RG_ring_3_58112 00:27:54.042 end memzones------- 00:27:54.042 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:27:54.042 heap id: 0 total size: 824.000000 MiB number of busy elements: 330 number of free elements: 18 00:27:54.042 list of free elements. 
size: 16.777710 MiB
[element listings elided: the list of free elements (18 entries) and the list of standard malloc elements (size: 199.291382 MiB) consist of several hundred entries of the form 'element at address: 0x... with size: ... MiB'; the tail of the malloc list and the memzone associations follow]
00:27:54.044 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:27:54.044 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:27:54.044 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:27:54.044 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:27:54.044 list of memzone associated elements. size: 607.930908 MiB 00:27:54.044 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:27:54.044 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:27:54.044 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:27:54.044 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:27:54.044 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:27:54.044 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58112_0 00:27:54.044 element at address: 0x200000dff340 with size: 48.003113 MiB 00:27:54.044 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58112_0 00:27:54.044 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:27:54.044 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58112_0 00:27:54.044 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:27:54.044 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:27:54.044 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:27:54.044 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:27:54.044 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:27:54.044 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58112_0 00:27:54.044 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:27:54.044 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58112 00:27:54.044 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:27:54.044 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58112 00:27:54.044 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:27:54.044 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:27:54.044 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:27:54.044 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:27:54.044 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:27:54.044 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:27:54.044 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:27:54.044 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:27:54.044 element at address: 0x200000cff100 with size: 1.000549 MiB 00:27:54.044 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58112 00:27:54.044 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:27:54.044 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58112 00:27:54.044 element at address: 0x200019affd40 with size: 1.000549 MiB 00:27:54.044 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58112 00:27:54.044 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:27:54.044 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58112 00:27:54.044 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:27:54.044 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58112 00:27:54.044 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:27:54.044 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58112 00:27:54.044 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:27:54.044 
associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:27:54.044 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:27:54.044 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:27:54.044 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:27:54.045 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:27:54.045 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:27:54.045 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58112 00:27:54.045 element at address: 0x20000085df80 with size: 0.125549 MiB 00:27:54.045 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58112 00:27:54.045 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:27:54.045 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:27:54.045 element at address: 0x200028864140 with size: 0.023804 MiB 00:27:54.045 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:27:54.045 element at address: 0x200000859d40 with size: 0.016174 MiB 00:27:54.045 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58112 00:27:54.045 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:27:54.045 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:27:54.045 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:27:54.045 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58112 00:27:54.045 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:27:54.045 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58112 00:27:54.045 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:27:54.045 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58112 00:27:54.045 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:27:54.045 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:27:54.045 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:27:54.045 23:09:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58112 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58112 ']' 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58112 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58112 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58112' 00:27:54.045 killing process with pid 58112 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58112 00:27:54.045 23:09:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58112 00:27:55.431 00:27:55.431 real 0m2.726s 00:27:55.431 user 0m2.746s 00:27:55.431 sys 0m0.403s 00:27:55.431 23:09:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.431 23:09:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:55.431 ************************************ 00:27:55.431 END TEST dpdk_mem_utility 00:27:55.431 
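The dpdk_mem_utility section above drives two tools: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK heap/mempool/memzone statistics to /tmp/spdk_mem_dump.txt (the filename returned in the RPC response), and scripts/dpdk_mem_info.py, which summarizes that dump and, with -m 0, prints the per-element breakdown of heap 0 ending in the memzone association table that ties each named mempool and ring to its backing element. A minimal sketch of the same flow against a live target, assuming the repository root as working directory and the default RPC socket:

    # Ask the running SPDK app to dump DPDK memory statistics
    # (the response names the output file, /tmp/spdk_mem_dump.txt).
    ./scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools, and memzones from the dump.
    ./scripts/dpdk_mem_info.py

    # Per-element view of heap 0: busy/free elements with addresses and sizes.
    ./scripts/dpdk_mem_info.py -m 0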
************************************ 00:27:55.431 23:09:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:55.431 23:09:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:55.431 23:09:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.431 23:09:36 -- common/autotest_common.sh@10 -- # set +x 00:27:55.431 ************************************ 00:27:55.431 START TEST event 00:27:55.431 ************************************ 00:27:55.431 23:09:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:55.697 * Looking for test storage... 00:27:55.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.697 23:09:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.697 23:09:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.697 23:09:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.697 23:09:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.697 23:09:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.697 23:09:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.697 23:09:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.697 23:09:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.697 23:09:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.697 23:09:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.697 23:09:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.697 23:09:36 event -- scripts/common.sh@344 -- # case "$op" in 00:27:55.697 23:09:36 event -- scripts/common.sh@345 -- # : 1 00:27:55.697 23:09:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.697 23:09:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.697 23:09:36 event -- scripts/common.sh@365 -- # decimal 1 00:27:55.697 23:09:36 event -- scripts/common.sh@353 -- # local d=1 00:27:55.697 23:09:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.697 23:09:36 event -- scripts/common.sh@355 -- # echo 1 00:27:55.697 23:09:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.697 23:09:36 event -- scripts/common.sh@366 -- # decimal 2 00:27:55.697 23:09:36 event -- scripts/common.sh@353 -- # local d=2 00:27:55.697 23:09:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.697 23:09:36 event -- scripts/common.sh@355 -- # echo 2 00:27:55.697 23:09:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.697 23:09:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.697 23:09:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.697 23:09:36 event -- scripts/common.sh@368 -- # return 0 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.697 --rc genhtml_branch_coverage=1 00:27:55.697 --rc genhtml_function_coverage=1 00:27:55.697 --rc genhtml_legend=1 00:27:55.697 --rc geninfo_all_blocks=1 00:27:55.697 --rc geninfo_unexecuted_blocks=1 00:27:55.697 00:27:55.697 ' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:55.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.697 --rc genhtml_branch_coverage=1 00:27:55.697 --rc genhtml_function_coverage=1 00:27:55.697 --rc genhtml_legend=1 00:27:55.697 --rc geninfo_all_blocks=1 00:27:55.697 --rc geninfo_unexecuted_blocks=1 00:27:55.697 00:27:55.697 ' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.697 --rc genhtml_branch_coverage=1 00:27:55.697 --rc genhtml_function_coverage=1 00:27:55.697 --rc genhtml_legend=1 00:27:55.697 --rc geninfo_all_blocks=1 00:27:55.697 --rc geninfo_unexecuted_blocks=1 00:27:55.697 00:27:55.697 ' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.697 --rc genhtml_branch_coverage=1 00:27:55.697 --rc genhtml_function_coverage=1 00:27:55.697 --rc genhtml_legend=1 00:27:55.697 --rc geninfo_all_blocks=1 00:27:55.697 --rc geninfo_unexecuted_blocks=1 00:27:55.697 00:27:55.697 ' 00:27:55.697 23:09:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:55.697 23:09:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:27:55.697 23:09:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:55.697 23:09:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.697 23:09:36 event -- common/autotest_common.sh@10 -- # set +x 00:27:55.697 ************************************ 00:27:55.697 START TEST event_perf 00:27:55.697 ************************************ 00:27:55.697 23:09:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:55.697 Running I/O for 1 seconds...[2024-12-09 
23:09:36.249388] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:27:55.697 [2024-12-09 23:09:36.249497] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ] 00:27:55.961 [2024-12-09 23:09:36.409121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.961 [2024-12-09 23:09:36.535170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.961 [2024-12-09 23:09:36.535268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.961 [2024-12-09 23:09:36.535300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.961 Running I/O for 1 seconds...[2024-12-09 23:09:36.535304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.359 00:27:57.359 lcore 0: 196679 00:27:57.359 lcore 1: 196681 00:27:57.359 lcore 2: 196683 00:27:57.359 lcore 3: 196678 00:27:57.359 done. 00:27:57.359 00:27:57.359 real 0m1.485s 00:27:57.359 user 0m4.287s 00:27:57.359 sys 0m0.077s 00:27:57.359 23:09:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.359 23:09:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:27:57.359 ************************************ 00:27:57.359 END TEST event_perf 00:27:57.359 ************************************ 00:27:57.359 23:09:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:57.359 23:09:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:57.359 23:09:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.359 23:09:37 event -- common/autotest_common.sh@10 -- # set +x 00:27:57.359 ************************************ 00:27:57.359 START TEST event_reactor 00:27:57.359 ************************************ 00:27:57.359 23:09:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:57.359 [2024-12-09 23:09:37.776165] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
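event_perf above launches one reactor per core in the mask (-m 0xF, four cores here) and counts how many events each reactor processes within the -t window; the lcore lines report roughly 196k events per core for the 1-second run. A sketch for a standalone run and a hypothetical scaling sweep, assuming a built SPDK tree (the -m and -t flags match the traced invocation):

    # Single run, as traced above: 4 cores for 1 second.
    ./test/event/event_perf/event_perf -m 0xF -t 1

    # Hypothetical scaling sweep across 1, 2, and 4 cores.
    for mask in 0x1 0x3 0xF; do
        ./test/event/event_perf/event_perf -m "$mask" -t 1
    done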
00:27:57.359 [2024-12-09 23:09:37.776393] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58243 ] 00:27:57.359 [2024-12-09 23:09:37.934411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.621 [2024-12-09 23:09:38.078283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.005 test_start 00:27:59.005 oneshot 00:27:59.005 tick 100 00:27:59.005 tick 100 00:27:59.005 tick 250 00:27:59.005 tick 100 00:27:59.005 tick 100 00:27:59.005 tick 100 00:27:59.005 tick 250 00:27:59.005 tick 500 00:27:59.005 tick 100 00:27:59.005 tick 100 00:27:59.005 tick 250 00:27:59.005 tick 100 00:27:59.005 tick 100 00:27:59.005 test_end 00:27:59.005 00:27:59.005 real 0m1.521s 00:27:59.005 user 0m1.340s 00:27:59.005 sys 0m0.070s 00:27:59.005 23:09:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.005 ************************************ 00:27:59.005 END TEST event_reactor 00:27:59.005 ************************************ 00:27:59.005 23:09:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:27:59.005 23:09:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:59.005 23:09:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:59.005 23:09:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.006 23:09:39 event -- common/autotest_common.sh@10 -- # set +x 00:27:59.006 ************************************ 00:27:59.006 START TEST event_reactor_perf 00:27:59.006 ************************************ 00:27:59.006 23:09:39 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:59.006 [2024-12-09 23:09:39.337187] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
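The event_reactor output above (oneshot, then interleaved tick 100 / tick 250 / tick 500 lines) comes from the reactor exercising timed pollers: each tick line appears to mark a poller with the named period firing during the 1-second run, the shorter periods firing correspondingly more often. On a live target the equivalent state can be inspected over RPC; a sketch, assuming a recent SPDK with the thread RPCs available:

    # List reactors and the lightweight threads scheduled on them.
    ./scripts/rpc.py framework_get_reactors

    # List registered pollers (active, timed, paused) per thread.
    ./scripts/rpc.py thread_get_pollers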
00:27:59.006 [2024-12-09 23:09:39.337308] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58285 ] 00:27:59.006 [2024-12-09 23:09:39.498912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.006 [2024-12-09 23:09:39.599723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.391 test_start 00:28:00.391 test_end 00:28:00.391 Performance: 316821 events per second 00:28:00.391 00:28:00.391 real 0m1.452s 00:28:00.391 user 0m1.268s 00:28:00.391 sys 0m0.075s 00:28:00.391 23:09:40 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.391 23:09:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:28:00.391 ************************************ 00:28:00.391 END TEST event_reactor_perf 00:28:00.391 ************************************ 00:28:00.391 23:09:40 event -- event/event.sh@49 -- # uname -s 00:28:00.391 23:09:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:28:00.391 23:09:40 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:28:00.391 23:09:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:00.391 23:09:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.391 23:09:40 event -- common/autotest_common.sh@10 -- # set +x 00:28:00.391 ************************************ 00:28:00.391 START TEST event_scheduler 00:28:00.391 ************************************ 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:28:00.391 * Looking for test storage... 
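reactor_perf above is the tighter microbenchmark: a single reactor (-c 0x1 in the EAL arguments) is driven with a continuous stream of events for the -t window, and the run reports raw throughput, here 316821 events per second. A sketch of a standalone run, matching the traced invocation:

    # Reactor event-throughput benchmark, 1 second on one core.
    ./test/event/reactor_perf/reactor_perf -t 1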
00:28:00.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.391 23:09:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:00.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.391 --rc genhtml_branch_coverage=1 00:28:00.391 --rc genhtml_function_coverage=1 00:28:00.391 --rc genhtml_legend=1 00:28:00.391 --rc geninfo_all_blocks=1 00:28:00.391 --rc geninfo_unexecuted_blocks=1 00:28:00.391 00:28:00.391 ' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:00.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.391 --rc genhtml_branch_coverage=1 00:28:00.391 --rc genhtml_function_coverage=1 00:28:00.391 --rc genhtml_legend=1 00:28:00.391 --rc geninfo_all_blocks=1 00:28:00.391 --rc geninfo_unexecuted_blocks=1 00:28:00.391 00:28:00.391 ' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:00.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.391 --rc genhtml_branch_coverage=1 00:28:00.391 --rc genhtml_function_coverage=1 00:28:00.391 --rc genhtml_legend=1 00:28:00.391 --rc geninfo_all_blocks=1 00:28:00.391 --rc geninfo_unexecuted_blocks=1 00:28:00.391 00:28:00.391 ' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:00.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.391 --rc genhtml_branch_coverage=1 00:28:00.391 --rc genhtml_function_coverage=1 00:28:00.391 --rc genhtml_legend=1 00:28:00.391 --rc geninfo_all_blocks=1 00:28:00.391 --rc geninfo_unexecuted_blocks=1 00:28:00.391 00:28:00.391 ' 00:28:00.391 23:09:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:28:00.391 23:09:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58350 00:28:00.391 23:09:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:28:00.391 23:09:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58350 00:28:00.391 23:09:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:28:00.391 23:09:40 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.391 23:09:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:00.391 [2024-12-09 23:09:41.001945] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:00.391 [2024-12-09 23:09:41.002232] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:28:00.653 [2024-12-09 23:09:41.157508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.653 [2024-12-09 23:09:41.262253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.653 [2024-12-09 23:09:41.262566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.653 [2024-12-09 23:09:41.262690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.653 [2024-12-09 23:09:41.262709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:28:01.595 23:09:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:01.595 POWER: Cannot set governor of lcore 0 to userspace 00:28:01.595 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:01.595 POWER: Cannot set governor of lcore 0 to performance 00:28:01.595 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:01.595 POWER: Cannot set governor of lcore 0 to userspace 00:28:01.595 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:01.595 POWER: Cannot set governor of lcore 0 to userspace 00:28:01.595 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:28:01.595 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:28:01.595 POWER: Unable to set Power Management Environment for lcore 0 00:28:01.595 [2024-12-09 23:09:41.868298] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:28:01.595 [2024-12-09 23:09:41.868332] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:28:01.595 [2024-12-09 23:09:41.868498] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:28:01.595 [2024-12-09 23:09:41.868584] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:28:01.595 [2024-12-09 23:09:41.868608] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:28:01.595 [2024-12-09 23:09:41.868664] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 [2024-12-09 23:09:42.099313] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:28:01.595 23:09:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:28:01.595 23:09:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:01.595 23:09:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 ************************************ 00:28:01.595 START TEST scheduler_create_thread 00:28:01.595 ************************************ 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 2 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 3 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 4 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 5 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 6 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 7 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 8 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 9 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 10 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.595 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:02.537 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.537 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:28:02.537 23:09:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:28:02.537 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.537 23:09:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:03.505 ************************************ 00:28:03.505 END TEST scheduler_create_thread 00:28:03.505 ************************************ 00:28:03.505 23:09:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.505 00:28:03.505 real 0m1.753s 00:28:03.505 user 0m0.013s 00:28:03.505 sys 0m0.004s 00:28:03.505 23:09:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.505 23:09:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:03.505 23:09:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:03.505 23:09:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58350 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58350 ']' 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58350 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58350 00:28:03.505 killing process with pid 58350 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58350' 00:28:03.505 23:09:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58350 00:28:03.505 23:09:43 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58350 00:28:03.764 [2024-12-09 23:09:44.345125] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:28:04.707 ************************************ 00:28:04.707 END TEST event_scheduler 00:28:04.707 ************************************ 00:28:04.707 00:28:04.707 real 0m4.219s 00:28:04.707 user 0m7.120s 00:28:04.707 sys 0m0.324s 00:28:04.707 23:09:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.707 23:09:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:04.707 23:09:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:28:04.707 23:09:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:28:04.707 23:09:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.707 23:09:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.707 23:09:45 event -- common/autotest_common.sh@10 -- # set +x 00:28:04.707 ************************************ 00:28:04.707 START TEST app_repeat 00:28:04.707 ************************************ 00:28:04.707 23:09:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:28:04.707 Process app_repeat pid: 58449 00:28:04.707 spdk_app_start Round 0 00:28:04.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:04.707 23:09:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58449 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58449' 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:04.708 23:09:45 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.708 23:09:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:04.708 [2024-12-09 23:09:45.134529] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
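The scheduler_create_thread test that just finished drives SPDK's scheduler test plugin entirely over JSON-RPC. Condensed from the xtrace above into plain shell — rpc_cmd is the autotest_common.sh wrapper around scripts/rpc.py, and capturing thread_id from its stdout is an assumption based on the thread_id=11/12 assignments in the trace:

    # four idle threads, one pinned to each core of the 0xF mask, 0% active
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads with varying activity
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create a throwaway thread and delete it again
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"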
00:28:04.708 [2024-12-09 23:09:45.134697] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58449 ] 00:28:04.708 [2024-12-09 23:09:45.307720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:04.968 [2024-12-09 23:09:45.413424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.968 [2024-12-09 23:09:45.413433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.550 23:09:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.550 23:09:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:28:05.550 23:09:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:05.814 Malloc0 00:28:05.814 23:09:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:06.076 Malloc1 00:28:06.076 23:09:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:06.076 23:09:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:06.338 /dev/nbd0 00:28:06.338 23:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:06.338 23:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:06.338 23:09:46 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:06.338 1+0 records in 00:28:06.338 1+0 records out 00:28:06.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193299 s, 21.2 MB/s 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:06.338 23:09:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:06.338 23:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.338 23:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:06.338 23:09:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:06.338 /dev/nbd1 00:28:06.600 23:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:06.600 23:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:06.600 1+0 records in 00:28:06.600 1+0 records out 00:28:06.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178367 s, 23.0 MB/s 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:06.600 23:09:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:06.600 23:09:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:06.600 23:09:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
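Both nbd attachments above pass through the waitfornbd helper before the device is used. A minimal sketch of what the trace shows; the retry delay is assumed (not visible in the xtrace), and /tmp/nbdtest stands in for the test/event/nbdtest scratch file this run uses:

    waitfornbd() {
        local nbd_name=$1 i
        # wait up to 20 probes for the kernel to publish the device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off between probes
        done
        # prove the device answers I/O: read one 4 KiB block with O_DIRECT
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]  # a non-empty read means the device is live
    }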
00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:06.600 { 00:28:06.600 "nbd_device": "/dev/nbd0", 00:28:06.600 "bdev_name": "Malloc0" 00:28:06.600 }, 00:28:06.600 { 00:28:06.600 "nbd_device": "/dev/nbd1", 00:28:06.600 "bdev_name": "Malloc1" 00:28:06.600 } 00:28:06.600 ]' 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:06.600 { 00:28:06.600 "nbd_device": "/dev/nbd0", 00:28:06.600 "bdev_name": "Malloc0" 00:28:06.600 }, 00:28:06.600 { 00:28:06.600 "nbd_device": "/dev/nbd1", 00:28:06.600 "bdev_name": "Malloc1" 00:28:06.600 } 00:28:06.600 ]' 00:28:06.600 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:06.862 /dev/nbd1' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:06.862 /dev/nbd1' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:06.862 256+0 records in 00:28:06.862 256+0 records out 00:28:06.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00818549 s, 128 MB/s 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:06.862 256+0 records in 00:28:06.862 256+0 records out 00:28:06.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192488 s, 54.5 MB/s 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:06.862 256+0 records in 00:28:06.862 256+0 records out 00:28:06.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224328 s, 46.7 MB/s 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:06.862 23:09:47 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:06.862 23:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:07.130 23:09:47 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.130 23:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:07.399 23:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:07.399 23:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:07.399 23:09:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:07.399 23:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:07.399 23:09:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:07.399 23:09:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:07.399 23:09:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:07.971 23:09:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:08.544 [2024-12-09 23:09:49.055285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.544 [2024-12-09 23:09:49.158579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.544 [2024-12-09 23:09:49.158603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.806 [2024-12-09 23:09:49.291447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:08.806 [2024-12-09 23:09:49.291517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:10.789 23:09:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:10.789 spdk_app_start Round 1 00:28:10.789 23:09:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:28:10.789 23:09:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
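With Round 1 starting, the shape of app_repeat_test is now visible in the trace: event.sh loops three rounds, each of which waits for the app's RPC socket, creates two malloc bdevs, verifies them over nbd, then kills the app so it reinitializes. In outline (rpc.py path abbreviated; $repeat_pid and $rpc_server are 58449 and /var/tmp/spdk-nbd.sock in this run):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc0
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # the app was started with -t 4, so it comes back up for the next round
        rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done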
00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.789 23:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:11.049 23:09:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.049 23:09:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:28:11.049 23:09:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:11.309 Malloc0 00:28:11.309 23:09:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:11.568 Malloc1 00:28:11.568 23:09:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:11.568 23:09:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:11.828 /dev/nbd0 00:28:11.828 23:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:11.828 23:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:11.828 1+0 records in 00:28:11.828 1+0 records out 
00:28:11.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221642 s, 18.5 MB/s 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:11.828 23:09:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:11.828 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:11.828 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:11.828 23:09:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:12.090 /dev/nbd1 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:12.090 1+0 records in 00:28:12.090 1+0 records out 00:28:12.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295073 s, 13.9 MB/s 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:12.090 23:09:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:12.090 { 00:28:12.090 "nbd_device": "/dev/nbd0", 00:28:12.090 "bdev_name": "Malloc0" 00:28:12.090 }, 00:28:12.090 { 00:28:12.090 "nbd_device": "/dev/nbd1", 00:28:12.090 "bdev_name": "Malloc1" 00:28:12.090 } 
00:28:12.090 ]' 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:12.090 { 00:28:12.090 "nbd_device": "/dev/nbd0", 00:28:12.090 "bdev_name": "Malloc0" 00:28:12.090 }, 00:28:12.090 { 00:28:12.090 "nbd_device": "/dev/nbd1", 00:28:12.090 "bdev_name": "Malloc1" 00:28:12.090 } 00:28:12.090 ]' 00:28:12.090 23:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:12.349 /dev/nbd1' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:12.349 /dev/nbd1' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:12.349 256+0 records in 00:28:12.349 256+0 records out 00:28:12.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0085615 s, 122 MB/s 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:12.349 256+0 records in 00:28:12.349 256+0 records out 00:28:12.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152483 s, 68.8 MB/s 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:12.349 256+0 records in 00:28:12.349 256+0 records out 00:28:12.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213951 s, 49.0 MB/s 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:12.349 23:09:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:12.350 23:09:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:12.350 23:09:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:12.350 23:09:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:12.610 23:09:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
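Each nbd_stop_disk call just issued is paired with waitfornbd_exit in the entries that follow: the helper polls /proc/partitions until the kernel drops the device. Roughly, with the per-probe delay assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1  # assumed delay; device still present
            else
                break      # the path taken in this run: the device is already gone
            fi
        done
        return 0
    }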
echo '[]' 00:28:12.873 23:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:13.134 23:09:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:13.134 23:09:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:13.393 23:09:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:13.963 [2024-12-09 23:09:54.552502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:14.264 [2024-12-09 23:09:54.649636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.264 [2024-12-09 23:09:54.649778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.264 [2024-12-09 23:09:54.771981] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:14.264 [2024-12-09 23:09:54.772057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:16.206 spdk_app_start Round 2 00:28:16.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:16.206 23:09:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:16.206 23:09:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:28:16.206 23:09:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.206 23:09:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:16.465 23:09:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.465 23:09:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:28:16.465 23:09:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:16.723 Malloc0 00:28:16.723 23:09:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:16.983 Malloc1 00:28:16.983 23:09:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:16.983 23:09:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:17.244 /dev/nbd0 00:28:17.244 23:09:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:17.244 23:09:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:17.244 1+0 records in 00:28:17.244 1+0 records out 
00:28:17.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296249 s, 13.8 MB/s 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:17.244 23:09:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:17.244 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.244 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:17.244 23:09:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:17.506 /dev/nbd1 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:17.507 1+0 records in 00:28:17.507 1+0 records out 00:28:17.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201107 s, 20.4 MB/s 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:17.507 23:09:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.507 23:09:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:17.769 { 00:28:17.769 "nbd_device": "/dev/nbd0", 00:28:17.769 "bdev_name": "Malloc0" 00:28:17.769 }, 00:28:17.769 { 00:28:17.769 "nbd_device": "/dev/nbd1", 00:28:17.769 "bdev_name": "Malloc1" 00:28:17.769 } 
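The nbd_get_disks RPC issued here feeds nbd_get_count, which is just jq plus grep -c over the returned JSON; the bare "true" at nbd_common.sh@65 elsewhere in the trace is the guard for grep's non-zero exit when no device matches:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c still prints 0 on no match but exits 1; || true keeps set -e happy
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }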
00:28:17.769 ]' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:17.769 { 00:28:17.769 "nbd_device": "/dev/nbd0", 00:28:17.769 "bdev_name": "Malloc0" 00:28:17.769 }, 00:28:17.769 { 00:28:17.769 "nbd_device": "/dev/nbd1", 00:28:17.769 "bdev_name": "Malloc1" 00:28:17.769 } 00:28:17.769 ]' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:17.769 /dev/nbd1' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:17.769 /dev/nbd1' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:17.769 256+0 records in 00:28:17.769 256+0 records out 00:28:17.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769133 s, 136 MB/s 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:17.769 256+0 records in 00:28:17.769 256+0 records out 00:28:17.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170598 s, 61.5 MB/s 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:17.769 256+0 records in 00:28:17.769 256+0 records out 00:28:17.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165607 s, 63.3 MB/s 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:17.769 23:09:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:17.769 23:09:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.031 23:09:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:18.293 23:09:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:18.554 23:09:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:18.554 23:09:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:18.816 23:09:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:19.428 [2024-12-09 23:09:59.932504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:19.428 [2024-12-09 23:10:00.017310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.428 [2024-12-09 23:10:00.017318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.690 [2024-12-09 23:10:00.119208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:19.690 [2024-12-09 23:10:00.119278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:22.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:22.235 23:10:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
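The teardown traced in the next entries is autotest_common.sh's killprocess. From those entries (and the identical pid 58350 teardown earlier), it reduces to roughly the following; the sudo branch is inferred only from the '[' reactor_0 = sudo ']' test and is an assumption, since just the plain-kill path runs here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                   # assert the process is still alive
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"   # assumed: a sudo-wrapped process needs a privileged kill
        else
            kill "$pid"
        fi
        wait "$pid" || true
    }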
00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:28:22.235 23:10:02 event.app_repeat -- event/event.sh@39 -- # killprocess 58449 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58449 ']' 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58449 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58449 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58449' 00:28:22.235 killing process with pid 58449 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58449 00:28:22.235 23:10:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58449 00:28:22.497 spdk_app_start is called in Round 0. 00:28:22.497 Shutdown signal received, stop current app iteration 00:28:22.497 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:28:22.497 spdk_app_start is called in Round 1. 00:28:22.497 Shutdown signal received, stop current app iteration 00:28:22.497 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:28:22.497 spdk_app_start is called in Round 2. 00:28:22.497 Shutdown signal received, stop current app iteration 00:28:22.497 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:28:22.497 spdk_app_start is called in Round 3. 00:28:22.497 Shutdown signal received, stop current app iteration 00:28:22.759 23:10:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:28:22.759 23:10:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:28:22.759 00:28:22.759 real 0m18.057s 00:28:22.759 user 0m39.423s 00:28:22.759 sys 0m2.199s 00:28:22.759 23:10:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.759 23:10:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:22.759 ************************************ 00:28:22.759 END TEST app_repeat 00:28:22.759 ************************************ 00:28:22.759 23:10:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:28:22.759 23:10:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:22.759 23:10:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:22.759 23:10:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.759 23:10:03 event -- common/autotest_common.sh@10 -- # set +x 00:28:22.759 ************************************ 00:28:22.759 START TEST cpu_locks 00:28:22.759 ************************************ 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:22.759 * Looking for test storage... 
00:28:22.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.759 23:10:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:22.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.759 --rc genhtml_branch_coverage=1 00:28:22.759 --rc genhtml_function_coverage=1 00:28:22.759 --rc genhtml_legend=1 00:28:22.759 --rc geninfo_all_blocks=1 00:28:22.759 --rc geninfo_unexecuted_blocks=1 00:28:22.759 00:28:22.759 ' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:22.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.759 --rc genhtml_branch_coverage=1 00:28:22.759 --rc genhtml_function_coverage=1 
00:28:22.759 --rc genhtml_legend=1 00:28:22.759 --rc geninfo_all_blocks=1 00:28:22.759 --rc geninfo_unexecuted_blocks=1 00:28:22.759 00:28:22.759 ' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:22.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.759 --rc genhtml_branch_coverage=1 00:28:22.759 --rc genhtml_function_coverage=1 00:28:22.759 --rc genhtml_legend=1 00:28:22.759 --rc geninfo_all_blocks=1 00:28:22.759 --rc geninfo_unexecuted_blocks=1 00:28:22.759 00:28:22.759 ' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:22.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.759 --rc genhtml_branch_coverage=1 00:28:22.759 --rc genhtml_function_coverage=1 00:28:22.759 --rc genhtml_legend=1 00:28:22.759 --rc geninfo_all_blocks=1 00:28:22.759 --rc geninfo_unexecuted_blocks=1 00:28:22.759 00:28:22.759 ' 00:28:22.759 23:10:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:28:22.759 23:10:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:28:22.759 23:10:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:28:22.759 23:10:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.759 23:10:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:22.759 ************************************ 00:28:22.759 START TEST default_locks 00:28:22.759 ************************************ 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58881 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58881 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58881 ']' 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:22.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.759 23:10:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:23.020 [2024-12-09 23:10:03.405170] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:23.020 [2024-12-09 23:10:03.405331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58881 ] 00:28:23.020 [2024-12-09 23:10:03.560611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.020 [2024-12-09 23:10:03.648831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58881 ']' 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.966 killing process with pid 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58881' 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58881 00:28:23.966 23:10:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58881 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58881 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58881 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58881 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58881 ']' 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.351 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.351 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.351 ERROR: process (pid: 58881) is no longer running 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:25.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58881) - No such process 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:25.352 ************************************ 00:28:25.352 END TEST default_locks 00:28:25.352 ************************************ 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:25.352 00:28:25.352 real 0m2.415s 00:28:25.352 user 0m2.436s 00:28:25.352 sys 0m0.457s 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.352 23:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:25.352 23:10:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:28:25.352 23:10:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:25.352 23:10:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.352 23:10:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:25.352 ************************************ 00:28:25.352 START TEST default_locks_via_rpc 00:28:25.352 ************************************ 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:28:25.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
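The NOT() wrapper driving this negative test can be sketched as below: it runs a command that is expected to fail (here, waitforlisten on the pid that was just killed) and inverts the result, normalizing signal-death exit codes above 128. A minimal reconstruction; the real wrapper in autotest_common.sh also validates its argument with type -t, as the trace shows.

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1    # deaths by signal count as plain failures
    (( es != 0 ))             # succeed only when the wrapped command failed
}

# Usage as in the trace: passes because pid 58881 no longer exists.
NOT waitforlisten 58881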
00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58945 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58945 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58945 ']' 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.352 23:10:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:25.352 [2024-12-09 23:10:05.855453] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:25.352 [2024-12-09 23:10:05.855578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:28:25.613 [2024-12-09 23:10:06.016461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.613 [2024-12-09 23:10:06.116858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58945 00:28:26.190 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58945 00:28:26.190 
23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58945 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58945 ']' 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58945 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58945 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.451 killing process with pid 58945 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58945' 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58945 00:28:26.451 23:10:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58945 00:28:27.833 00:28:27.833 real 0m2.683s 00:28:27.833 user 0m2.691s 00:28:27.833 sys 0m0.429s 00:28:27.834 23:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.834 23:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 ************************************ 00:28:27.834 END TEST default_locks_via_rpc 00:28:27.834 ************************************ 00:28:28.095 23:10:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:28:28.095 23:10:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.095 23:10:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.095 23:10:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:28.095 ************************************ 00:28:28.095 START TEST non_locking_app_on_locked_coremask 00:28:28.095 ************************************ 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58997 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58997 /var/tmp/spdk.sock 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58997 ']' 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
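Condensed, the default_locks_via_rpc sequence traced here is the following (pid, socket, and lock-file naming taken from this run; the compgen glob check stands in for the no_locks helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" framework_disable_cpumask_locks             # drop the lock taken at startup
if compgen -G '/var/tmp/spdk_cpu_lock_*' > /dev/null; then
    echo 'expected no CPU lock files while locks are disabled' >&2
    exit 1
fi
"$rpc" framework_enable_cpumask_locks              # retake the lock on core 0
lslocks -p 58945 | grep -q spdk_cpu_lock           # verify the flock is held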
00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.095 23:10:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:28.095 [2024-12-09 23:10:08.571790] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:28.095 [2024-12-09 23:10:08.571892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58997 ] 00:28:28.095 [2024-12-09 23:10:08.721852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.355 [2024-12-09 23:10:08.806326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59013 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59013 /var/tmp/spdk2.sock 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59013 ']' 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:28.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.927 23:10:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:28.927 [2024-12-09 23:10:09.491104] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:28.927 [2024-12-09 23:10:09.491227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59013 ] 00:28:29.189 [2024-12-09 23:10:09.661752] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:29.189 [2024-12-09 23:10:09.661814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.451 [2024-12-09 23:10:09.860859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58997 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58997 00:28:30.843 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58997 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58997 ']' 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58997 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58997 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.844 killing process with pid 58997 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58997' 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58997 00:28:30.844 23:10:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58997 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59013 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59013 ']' 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59013 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59013 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.396 killing process with pid 59013 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59013' 00:28:33.396 23:10:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59013 00:28:33.396 23:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59013 00:28:34.803 00:28:34.803 real 0m6.630s 00:28:34.803 user 0m6.901s 00:28:34.803 sys 0m0.793s 00:28:34.803 23:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.803 ************************************ 00:28:34.803 END TEST non_locking_app_on_locked_coremask 00:28:34.803 ************************************ 00:28:34.803 23:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:34.803 23:10:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:28:34.803 23:10:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:34.803 23:10:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.803 23:10:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:34.803 ************************************ 00:28:34.803 START TEST locking_app_on_unlocked_coremask 00:28:34.803 ************************************ 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59115 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59115 /var/tmp/spdk.sock 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59115 ']' 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:34.803 23:10:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:28:34.803 [2024-12-09 23:10:15.251554] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:34.803 [2024-12-09 23:10:15.251680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ] 00:28:34.803 [2024-12-09 23:10:15.406723] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:34.803 [2024-12-09 23:10:15.406784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.068 [2024-12-09 23:10:15.492628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59130 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59130 /var/tmp/spdk2.sock 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.640 23:10:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:35.640 [2024-12-09 23:10:16.158628] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:35.640 [2024-12-09 23:10:16.158744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:28:35.900 [2024-12-09 23:10:16.324477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.900 [2024-12-09 23:10:16.490907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.844 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.844 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:36.844 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59130 00:28:36.844 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:28:36.844 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59115 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59115 ']' 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59115 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59115 00:28:37.415 killing process with pid 59115 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59115' 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59115 00:28:37.415 23:10:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59115 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59130 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:28:39.997 killing process with pid 59130 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.997 23:10:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:28:39.997 23:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:28:41.000 00:28:41.000 real 0m6.280s 00:28:41.000 user 0m6.541s 00:28:41.000 sys 0m0.830s 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:41.000 ************************************ 00:28:41.000 END TEST locking_app_on_unlocked_coremask 00:28:41.000 ************************************ 00:28:41.000 23:10:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:41.000 23:10:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:41.000 23:10:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.000 23:10:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:41.000 ************************************ 00:28:41.000 START TEST locking_app_on_locked_coremask 00:28:41.000 ************************************ 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59222 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59222 /var/tmp/spdk.sock 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.000 23:10:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:41.000 [2024-12-09 23:10:21.559947] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:41.000 [2024-12-09 23:10:21.560235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:28:41.261 [2024-12-09 23:10:21.711507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.261 [2024-12-09 23:10:21.796241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59238 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59238 /var/tmp/spdk2.sock 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59238 /var/tmp/spdk2.sock 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:41.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59238 /var/tmp/spdk2.sock 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.832 23:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:41.832 [2024-12-09 23:10:22.444732] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:41.832 [2024-12-09 23:10:22.445019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:28:42.093 [2024-12-09 23:10:22.609381] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59222 has claimed it. 00:28:42.093 [2024-12-09 23:10:22.609445] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:42.665 ERROR: process (pid: 59238) is no longer running 00:28:42.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59238) - No such process 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:42.665 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59222 ']' 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59222 00:28:42.666 killing process with pid 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59222' 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59222 00:28:42.666 23:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59222 00:28:44.593 00:28:44.593 real 0m3.245s 00:28:44.593 user 0m3.425s 00:28:44.593 sys 0m0.533s 00:28:44.593 ************************************ 00:28:44.593 END TEST locking_app_on_locked_coremask 00:28:44.593 ************************************ 00:28:44.593 23:10:24 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.593 23:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 23:10:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:44.593 23:10:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:44.593 23:10:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.593 23:10:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 ************************************ 00:28:44.593 START TEST locking_overlapped_coremask 00:28:44.593 ************************************ 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59291 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59291 /var/tmp/spdk.sock 00:28:44.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59291 ']' 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 23:10:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:44.593 [2024-12-09 23:10:24.856373] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:44.593 [2024-12-09 23:10:24.856509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:28:44.593 [2024-12-09 23:10:25.014496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.593 [2024-12-09 23:10:25.119279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.593 [2024-12-09 23:10:25.119688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.593 [2024-12-09 23:10:25.119745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59309 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59309 /var/tmp/spdk2.sock 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59309 /var/tmp/spdk2.sock 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59309 ']' 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.223 23:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 [2024-12-09 23:10:25.794328] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:45.223 [2024-12-09 23:10:25.794451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:28:45.482 [2024-12-09 23:10:25.968450] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59291 has claimed it. 00:28:45.482 [2024-12-09 23:10:25.968522] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:46.052 ERROR: process (pid: 59309) is no longer running 00:28:46.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59309) - No such process 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59291 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59291 ']' 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59291 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59291 00:28:46.052 killing process with pid 59291 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59291' 00:28:46.052 23:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59291 00:28:46.052 23:10:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59291 00:28:47.435 ************************************ 00:28:47.435 END TEST locking_overlapped_coremask 00:28:47.435 00:28:47.435 real 0m3.127s 00:28:47.435 user 0m8.532s 00:28:47.435 sys 0m0.431s 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:47.435 ************************************ 00:28:47.435 23:10:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:28:47.435 23:10:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.435 23:10:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.435 23:10:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:47.435 ************************************ 00:28:47.435 START TEST locking_overlapped_coremask_via_rpc 00:28:47.435 ************************************ 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59362 00:28:47.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59362 ']' 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.435 23:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:47.435 [2024-12-09 23:10:28.046818] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:47.435 [2024-12-09 23:10:28.046979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ] 00:28:47.696 [2024-12-09 23:10:28.218484] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:47.696 [2024-12-09 23:10:28.218536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.696 [2024-12-09 23:10:28.324555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.696 [2024-12-09 23:10:28.324855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.696 [2024-12-09 23:10:28.324925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59380 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59380 ']' 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:48.639 23:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:28:48.639 [2024-12-09 23:10:28.998199] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:48.639 [2024-12-09 23:10:28.998320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:28:48.639 [2024-12-09 23:10:29.161939] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:48.639 [2024-12-09 23:10:29.161988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.900 [2024-12-09 23:10:29.337294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.900 [2024-12-09 23:10:29.341064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.900 [2024-12-09 23:10:29.341088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:49.845 [2024-12-09 23:10:30.315125] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59362 has claimed it. 00:28:49.845 request: 00:28:49.845 { 00:28:49.845 "method": "framework_enable_cpumask_locks", 00:28:49.845 "req_id": 1 00:28:49.845 } 00:28:49.845 Got JSON-RPC error response 00:28:49.845 response: 00:28:49.845 { 00:28:49.845 "code": -32603, 00:28:49.845 "message": "Failed to claim CPU core: 2" 00:28:49.845 } 00:28:49.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
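The -32603 "Failed to claim CPU core: 2" response above is the expected outcome: the first target claimed mask 0x7 (cores 0-2) and this one runs on 0x1c (cores 2-4), so the two masks intersect on core 2 and framework_enable_cpumask_locks from the second instance must fail. The overlap is easy to verify:

    $ printf 'overlap: 0x%x\n' $((0x7 & 0x1c))
    overlap: 0x4    # bit 2 set -> core 2 is claimed by both targets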
00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59362 ']' 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.845 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59380 ']' 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
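waitforlisten above blocks until the freshly started target answers on its RPC socket. A simplified stand-in for the helper in autotest_common.sh -- a sketch only, using spdk_get_version as the liveness probe (the real helper's retry logic differs in detail):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
            scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up
    }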
00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.107 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:50.367 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.367 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:50.367 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:28:50.367 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:50.367 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:50.368 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:50.368 00:28:50.368 real 0m2.811s 00:28:50.368 user 0m1.090s 00:28:50.368 sys 0m0.125s 00:28:50.368 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.368 23:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:50.368 ************************************ 00:28:50.368 END TEST locking_overlapped_coremask_via_rpc 00:28:50.368 ************************************ 00:28:50.368 23:10:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:28:50.368 23:10:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:28:50.368 23:10:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59362 ']' 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59362 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59362 00:28:50.368 killing process with pid 59362 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59362' 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59362 00:28:50.368 23:10:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59362 00:28:51.759 23:10:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:28:51.759 23:10:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59380 ']' 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59380 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.759 
23:10:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59380 00:28:51.759 killing process with pid 59380 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59380' 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59380 00:28:51.759 23:10:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59380 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59362 ']' 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59362 00:28:53.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59362) - No such process 00:28:53.156 Process with pid 59362 is not found 00:28:53.156 Process with pid 59380 is not found 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59362 is not found' 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59380 ']' 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59380 00:28:53.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59380) - No such process 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59380 is not found' 00:28:53.156 23:10:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:53.156 ************************************ 00:28:53.156 END TEST cpu_locks 00:28:53.156 ************************************ 00:28:53.156 00:28:53.156 real 0m30.419s 00:28:53.156 user 0m52.841s 00:28:53.156 sys 0m4.396s 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.156 23:10:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:53.156 ************************************ 00:28:53.156 END TEST event 00:28:53.156 ************************************ 00:28:53.156 00:28:53.156 real 0m57.573s 00:28:53.156 user 1m46.471s 00:28:53.156 sys 0m7.355s 00:28:53.156 23:10:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.156 23:10:33 event -- common/autotest_common.sh@10 -- # set +x 00:28:53.156 23:10:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:53.156 23:10:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:53.156 23:10:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.156 23:10:33 -- common/autotest_common.sh@10 -- # set +x 00:28:53.156 ************************************ 00:28:53.156 START TEST thread 00:28:53.156 ************************************ 00:28:53.156 23:10:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:53.156 * Looking for test storage... 
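The scripts/common.sh trace that follows is cmp_versions deciding whether the installed lcov predates 2.x (lt 1.15 2), which selects the coverage flags exported afterwards. Condensed to its core, the dotted-version compare works like this (a sketch, not the verbatim function):

    lt() {  # true if dotted version $1 < $2
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }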
00:28:53.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:53.156 23:10:33 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.156 23:10:33 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.156 23:10:33 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.156 23:10:33 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.156 23:10:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.156 23:10:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.156 23:10:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.156 23:10:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.156 23:10:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.156 23:10:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.156 23:10:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.156 23:10:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.156 23:10:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.156 23:10:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.156 23:10:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.156 23:10:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:53.156 23:10:33 thread -- scripts/common.sh@345 -- # : 1 00:28:53.156 23:10:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.156 23:10:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:53.156 23:10:33 thread -- scripts/common.sh@365 -- # decimal 1 00:28:53.156 23:10:33 thread -- scripts/common.sh@353 -- # local d=1 00:28:53.156 23:10:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.156 23:10:33 thread -- scripts/common.sh@355 -- # echo 1 00:28:53.417 23:10:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.417 23:10:33 thread -- scripts/common.sh@366 -- # decimal 2 00:28:53.417 23:10:33 thread -- scripts/common.sh@353 -- # local d=2 00:28:53.417 23:10:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.417 23:10:33 thread -- scripts/common.sh@355 -- # echo 2 00:28:53.417 23:10:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.417 23:10:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.417 23:10:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.417 23:10:33 thread -- scripts/common.sh@368 -- # return 0 00:28:53.417 23:10:33 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.417 23:10:33 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.417 --rc genhtml_branch_coverage=1 00:28:53.417 --rc genhtml_function_coverage=1 00:28:53.417 --rc genhtml_legend=1 00:28:53.417 --rc geninfo_all_blocks=1 00:28:53.417 --rc geninfo_unexecuted_blocks=1 00:28:53.417 00:28:53.417 ' 00:28:53.417 23:10:33 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.417 --rc genhtml_branch_coverage=1 00:28:53.417 --rc genhtml_function_coverage=1 00:28:53.417 --rc genhtml_legend=1 00:28:53.417 --rc geninfo_all_blocks=1 00:28:53.417 --rc geninfo_unexecuted_blocks=1 00:28:53.417 00:28:53.417 ' 00:28:53.417 23:10:33 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:53.417 --rc genhtml_branch_coverage=1 00:28:53.417 --rc genhtml_function_coverage=1 00:28:53.417 --rc genhtml_legend=1 00:28:53.417 --rc geninfo_all_blocks=1 00:28:53.417 --rc geninfo_unexecuted_blocks=1 00:28:53.417 00:28:53.417 ' 00:28:53.417 23:10:33 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.417 --rc genhtml_branch_coverage=1 00:28:53.417 --rc genhtml_function_coverage=1 00:28:53.417 --rc genhtml_legend=1 00:28:53.417 --rc geninfo_all_blocks=1 00:28:53.417 --rc geninfo_unexecuted_blocks=1 00:28:53.417 00:28:53.417 ' 00:28:53.417 23:10:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:53.418 23:10:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:28:53.418 23:10:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.418 23:10:33 thread -- common/autotest_common.sh@10 -- # set +x 00:28:53.418 ************************************ 00:28:53.418 START TEST thread_poller_perf 00:28:53.418 ************************************ 00:28:53.418 23:10:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:53.418 [2024-12-09 23:10:33.829265] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:53.418 [2024-12-09 23:10:33.829512] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59540 ] 00:28:53.418 [2024-12-09 23:10:33.984970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.679 [2024-12-09 23:10:34.084629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.679 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:28:54.624 [2024-12-09T23:10:35.260Z] ====================================== 00:28:54.624 [2024-12-09T23:10:35.260Z] busy:2610415760 (cyc) 00:28:54.624 [2024-12-09T23:10:35.260Z] total_run_count: 305000 00:28:54.624 [2024-12-09T23:10:35.260Z] tsc_hz: 2600000000 (cyc) 00:28:54.624 [2024-12-09T23:10:35.260Z] ====================================== 00:28:54.624 [2024-12-09T23:10:35.260Z] poller_cost: 8558 (cyc), 3291 (nsec) 00:28:54.624 00:28:54.624 real 0m1.445s 00:28:54.624 user 0m1.271s 00:28:54.624 sys 0m0.066s 00:28:54.624 23:10:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.624 ************************************ 00:28:54.624 END TEST thread_poller_perf 00:28:54.624 ************************************ 00:28:54.624 23:10:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:54.884 23:10:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:54.884 23:10:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:28:54.884 23:10:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.884 23:10:35 thread -- common/autotest_common.sh@10 -- # set +x 00:28:54.884 ************************************ 00:28:54.884 START TEST thread_poller_perf 00:28:54.884 ************************************ 00:28:54.884 23:10:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:54.884 [2024-12-09 23:10:35.320655] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:28:54.884 [2024-12-09 23:10:35.320905] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:28:54.884 [2024-12-09 23:10:35.479402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.145 Running 1000 pollers for 1 seconds with 0 microseconds period. 
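The poller_cost line above is simply busy cycles divided by total_run_count, then converted to nanoseconds with tsc_hz. Recomputing the first run (period 1 usec) by hand matches the report, and the 0-usec run that follows checks out the same way (714 cyc -> 274 nsec):

    $ echo $((2610415760 / 305000))               # busy (cyc) / total_run_count
    8558
    $ echo $((8558 * 1000000000 / 2600000000))    # cyc -> nsec at tsc_hz 2.6 GHz
    3291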
00:28:55.145 [2024-12-09 23:10:35.582073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.531 [2024-12-09T23:10:37.167Z] ====================================== 00:28:56.531 [2024-12-09T23:10:37.167Z] busy:2603032350 (cyc) 00:28:56.531 [2024-12-09T23:10:37.167Z] total_run_count: 3641000 00:28:56.531 [2024-12-09T23:10:37.167Z] tsc_hz: 2600000000 (cyc) 00:28:56.531 [2024-12-09T23:10:37.167Z] ====================================== 00:28:56.531 [2024-12-09T23:10:37.167Z] poller_cost: 714 (cyc), 274 (nsec) 00:28:56.531 ************************************ 00:28:56.531 END TEST thread_poller_perf 00:28:56.531 ************************************ 00:28:56.531 00:28:56.531 real 0m1.447s 00:28:56.531 user 0m1.281s 00:28:56.531 sys 0m0.059s 00:28:56.531 23:10:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.531 23:10:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:56.531 23:10:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:28:56.531 00:28:56.531 real 0m3.112s 00:28:56.531 user 0m2.660s 00:28:56.531 sys 0m0.235s 00:28:56.531 ************************************ 00:28:56.531 END TEST thread 00:28:56.531 ************************************ 00:28:56.531 23:10:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.531 23:10:36 thread -- common/autotest_common.sh@10 -- # set +x 00:28:56.531 23:10:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:28:56.531 23:10:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:56.531 23:10:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.531 23:10:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.531 23:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.531 ************************************ 00:28:56.531 START TEST app_cmdline 00:28:56.531 ************************************ 00:28:56.531 23:10:36 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:56.531 * Looking for test storage... 
00:28:56.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:56.531 23:10:36 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.531 23:10:36 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.531 23:10:36 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.531 23:10:36 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.531 23:10:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:28:56.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.532 23:10:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:56.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.532 --rc genhtml_branch_coverage=1 00:28:56.532 --rc genhtml_function_coverage=1 00:28:56.532 --rc genhtml_legend=1 00:28:56.532 --rc geninfo_all_blocks=1 00:28:56.532 --rc geninfo_unexecuted_blocks=1 00:28:56.532 00:28:56.532 ' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.532 --rc genhtml_branch_coverage=1 00:28:56.532 --rc genhtml_function_coverage=1 00:28:56.532 --rc genhtml_legend=1 00:28:56.532 --rc geninfo_all_blocks=1 00:28:56.532 --rc geninfo_unexecuted_blocks=1 00:28:56.532 00:28:56.532 ' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.532 --rc genhtml_branch_coverage=1 00:28:56.532 --rc genhtml_function_coverage=1 00:28:56.532 --rc genhtml_legend=1 00:28:56.532 --rc geninfo_all_blocks=1 00:28:56.532 --rc geninfo_unexecuted_blocks=1 00:28:56.532 00:28:56.532 ' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.532 --rc genhtml_branch_coverage=1 00:28:56.532 --rc genhtml_function_coverage=1 00:28:56.532 --rc genhtml_legend=1 00:28:56.532 --rc geninfo_all_blocks=1 00:28:56.532 --rc geninfo_unexecuted_blocks=1 00:28:56.532 00:28:56.532 ' 00:28:56.532 23:10:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:28:56.532 23:10:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59660 00:28:56.532 23:10:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59660 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.532 23:10:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.532 23:10:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:56.532 [2024-12-09 23:10:37.022719] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
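Note the target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked over /var/tmp/spdk.sock; any other call must come back as -32601, which the env_dpdk_get_mem_stats probe further down confirms. In terms of scripts/rpc.py:

    $ scripts/rpc.py spdk_get_version           # allowed: returns the version JSON
    $ scripts/rpc.py rpc_get_methods            # allowed: lists exactly these two methods
    $ scripts/rpc.py env_dpdk_get_mem_stats     # not on the allow-list
    # -> JSON-RPC error -32601 'Method not found'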
00:28:56.532 [2024-12-09 23:10:37.022844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:28:56.795 [2024-12-09 23:10:37.181208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.795 [2024-12-09 23:10:37.280519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.370 23:10:37 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.370 23:10:37 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:28:57.370 23:10:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:28:57.643 { 00:28:57.643 "version": "SPDK v25.01-pre git sha1 c12cb8fe3", 00:28:57.643 "fields": { 00:28:57.643 "major": 25, 00:28:57.643 "minor": 1, 00:28:57.643 "patch": 0, 00:28:57.643 "suffix": "-pre", 00:28:57.643 "commit": "c12cb8fe3" 00:28:57.643 } 00:28:57.643 } 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:28:57.643 23:10:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.643 23:10:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:57.643 23:10:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:28:57.643 23:10:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:57.644 23:10:38 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:57.905 request: 00:28:57.905 { 00:28:57.905 "method": "env_dpdk_get_mem_stats", 00:28:57.905 "req_id": 1 00:28:57.905 } 00:28:57.905 Got JSON-RPC error response 00:28:57.905 response: 00:28:57.905 { 00:28:57.905 "code": -32601, 00:28:57.905 "message": "Method not found" 00:28:57.905 } 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.905 23:10:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59660 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59660 ']' 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59660 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59660 00:28:57.905 killing process with pid 59660 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59660' 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@973 -- # kill 59660 00:28:57.905 23:10:38 app_cmdline -- common/autotest_common.sh@978 -- # wait 59660 00:28:59.297 ************************************ 00:28:59.297 END TEST app_cmdline 00:28:59.297 ************************************ 00:28:59.297 00:28:59.297 real 0m3.024s 00:28:59.297 user 0m3.348s 00:28:59.297 sys 0m0.415s 00:28:59.297 23:10:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.297 23:10:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:59.297 23:10:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:59.297 23:10:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.297 23:10:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.297 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:59.297 ************************************ 00:28:59.297 START TEST version 00:28:59.297 ************************************ 00:28:59.297 23:10:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:59.297 * Looking for test storage... 
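version.sh below recovers each version component by grepping the matching #define out of include/spdk/version.h; the grep/cut/tr pipeline is visible verbatim in the trace. For the major number it amounts to:

    $ grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
          | cut -f2 | tr -d '"'
    25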
00:28:59.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:59.297 23:10:39 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.297 23:10:39 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.297 23:10:39 version -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.559 23:10:39 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.559 23:10:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.559 23:10:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.559 23:10:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.559 23:10:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.559 23:10:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.559 23:10:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.560 23:10:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.560 23:10:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.560 23:10:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.560 23:10:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.560 23:10:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.560 23:10:39 version -- scripts/common.sh@344 -- # case "$op" in 00:28:59.560 23:10:39 version -- scripts/common.sh@345 -- # : 1 00:28:59.560 23:10:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.560 23:10:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.560 23:10:39 version -- scripts/common.sh@365 -- # decimal 1 00:28:59.560 23:10:39 version -- scripts/common.sh@353 -- # local d=1 00:28:59.560 23:10:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.560 23:10:39 version -- scripts/common.sh@355 -- # echo 1 00:28:59.560 23:10:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.560 23:10:39 version -- scripts/common.sh@366 -- # decimal 2 00:28:59.560 23:10:39 version -- scripts/common.sh@353 -- # local d=2 00:28:59.560 23:10:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.560 23:10:39 version -- scripts/common.sh@355 -- # echo 2 00:28:59.560 23:10:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.560 23:10:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.560 23:10:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.560 23:10:39 version -- scripts/common.sh@368 -- # return 0 00:28:59.560 23:10:39 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.560 23:10:39 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.560 --rc genhtml_branch_coverage=1 00:28:59.560 --rc genhtml_function_coverage=1 00:28:59.560 --rc genhtml_legend=1 00:28:59.560 --rc geninfo_all_blocks=1 00:28:59.560 --rc geninfo_unexecuted_blocks=1 00:28:59.560 00:28:59.560 ' 00:28:59.560 23:10:39 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.560 --rc genhtml_branch_coverage=1 00:28:59.560 --rc genhtml_function_coverage=1 00:28:59.560 --rc genhtml_legend=1 00:28:59.560 --rc geninfo_all_blocks=1 00:28:59.560 --rc geninfo_unexecuted_blocks=1 00:28:59.560 00:28:59.560 ' 00:28:59.560 23:10:39 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.560 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:59.560 --rc genhtml_branch_coverage=1 00:28:59.560 --rc genhtml_function_coverage=1 00:28:59.560 --rc genhtml_legend=1 00:28:59.560 --rc geninfo_all_blocks=1 00:28:59.560 --rc geninfo_unexecuted_blocks=1 00:28:59.560 00:28:59.560 ' 00:28:59.560 23:10:39 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.560 --rc genhtml_branch_coverage=1 00:28:59.560 --rc genhtml_function_coverage=1 00:28:59.560 --rc genhtml_legend=1 00:28:59.560 --rc geninfo_all_blocks=1 00:28:59.560 --rc geninfo_unexecuted_blocks=1 00:28:59.560 00:28:59.560 ' 00:28:59.560 23:10:39 version -- app/version.sh@17 -- # get_header_version major 00:28:59.560 23:10:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # cut -f2 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # tr -d '"' 00:28:59.560 23:10:40 version -- app/version.sh@17 -- # major=25 00:28:59.560 23:10:40 version -- app/version.sh@18 -- # get_header_version minor 00:28:59.560 23:10:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # cut -f2 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # tr -d '"' 00:28:59.560 23:10:40 version -- app/version.sh@18 -- # minor=1 00:28:59.560 23:10:40 version -- app/version.sh@19 -- # get_header_version patch 00:28:59.560 23:10:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # cut -f2 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # tr -d '"' 00:28:59.560 23:10:40 version -- app/version.sh@19 -- # patch=0 00:28:59.560 23:10:40 version -- app/version.sh@20 -- # get_header_version suffix 00:28:59.560 23:10:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # cut -f2 00:28:59.560 23:10:40 version -- app/version.sh@14 -- # tr -d '"' 00:28:59.560 23:10:40 version -- app/version.sh@20 -- # suffix=-pre 00:28:59.560 23:10:40 version -- app/version.sh@22 -- # version=25.1 00:28:59.560 23:10:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:28:59.560 23:10:40 version -- app/version.sh@28 -- # version=25.1rc0 00:28:59.560 23:10:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:59.560 23:10:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:28:59.560 23:10:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:28:59.560 23:10:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:28:59.560 00:28:59.560 real 0m0.191s 00:28:59.560 user 0m0.131s 00:28:59.560 sys 0m0.085s 00:28:59.560 23:10:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.560 23:10:40 version -- common/autotest_common.sh@10 -- # set +x 00:28:59.560 ************************************ 00:28:59.560 END TEST version 00:28:59.560 ************************************ 00:28:59.560 23:10:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:28:59.560 23:10:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:28:59.560 23:10:40 -- spdk/autotest.sh@194 -- # uname -s 00:28:59.560 23:10:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:59.560 23:10:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:59.560 23:10:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:59.560 23:10:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:59.560 23:10:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:59.560 23:10:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.560 23:10:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.560 23:10:40 -- common/autotest_common.sh@10 -- # set +x 00:28:59.560 ************************************ 00:28:59.560 START TEST blockdev_nvme 00:28:59.560 ************************************ 00:28:59.560 23:10:40 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:59.560 * Looking for test storage... 00:28:59.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:59.560 23:10:40 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.560 23:10:40 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.560 23:10:40 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.822 23:10:40 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.822 23:10:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.823 23:10:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:28:59.823 23:10:40 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.823 23:10:40 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.823 --rc genhtml_branch_coverage=1 00:28:59.823 --rc genhtml_function_coverage=1 00:28:59.823 --rc genhtml_legend=1 00:28:59.823 --rc geninfo_all_blocks=1 00:28:59.823 --rc geninfo_unexecuted_blocks=1 00:28:59.823 00:28:59.823 ' 00:28:59.823 23:10:40 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.823 --rc genhtml_branch_coverage=1 00:28:59.823 --rc genhtml_function_coverage=1 00:28:59.823 --rc genhtml_legend=1 00:28:59.823 --rc geninfo_all_blocks=1 00:28:59.823 --rc geninfo_unexecuted_blocks=1 00:28:59.823 00:28:59.823 ' 00:28:59.823 23:10:40 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.823 --rc genhtml_branch_coverage=1 00:28:59.823 --rc genhtml_function_coverage=1 00:28:59.823 --rc genhtml_legend=1 00:28:59.823 --rc geninfo_all_blocks=1 00:28:59.823 --rc geninfo_unexecuted_blocks=1 00:28:59.823 00:28:59.823 ' 00:28:59.823 23:10:40 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.823 --rc genhtml_branch_coverage=1 00:28:59.823 --rc genhtml_function_coverage=1 00:28:59.823 --rc genhtml_legend=1 00:28:59.823 --rc geninfo_all_blocks=1 00:28:59.823 --rc geninfo_unexecuted_blocks=1 00:28:59.823 00:28:59.823 ' 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:59.823 23:10:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:28:59.823 23:10:40 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:28:59.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59832 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59832 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.824 23:10:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:59.824 23:10:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:59.824 [2024-12-09 23:10:40.327655] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
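setup_nvme_conf below asks scripts/gen_nvme.sh for a bdev subsystem config and loads it with load_subsystem_config, attaching the four QEMU-emulated controllers at 0000:00:10.0 through 0000:00:13.0. The same attachment can be done per controller over RPC; a sketch using scripts/rpc.py (flag spellings as commonly accepted by bdev_nvme_attach_controller, shown for illustration):

    $ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $ scripts/rpc.py bdev_get_bdevs             # Nvme0n1 should now be listed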
00:28:59.824 [2024-12-09 23:10:40.327773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:29:00.086 [2024-12-09 23:10:40.483224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.086 [2024-12-09 23:10:40.583963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.658 23:10:41 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.658 23:10:41 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:00.658 23:10:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:00.658 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.658 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.919 23:10:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.919 23:10:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:29:00.919 23:10:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.919 23:10:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.919 23:10:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.919 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.181 23:10:41 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:29:01.181 23:10:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:29:01.181 23:10:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.181 23:10:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.181 23:10:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:29:01.181 23:10:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.181 23:10:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:29:01.181 23:10:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:29:01.182 23:10:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4ac65ef7-5d58-4e1b-8cfe-038adcdb8fb6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4ac65ef7-5d58-4e1b-8cfe-038adcdb8fb6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ad7ed182-3ee9-4063-b411-06c661f79856"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ad7ed182-3ee9-4063-b411-06c661f79856",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "53a45028-d813-4233-8e62-6cc8a4986f02"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53a45028-d813-4233-8e62-6cc8a4986f02",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "183f95cd-ce4e-40ba-a13e-5a99400e62c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "183f95cd-ce4e-40ba-a13e-5a99400e62c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8364e0cf-6fd8-4366-a0a0-7961a58cc036"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8364e0cf-6fd8-4366-a0a0-7961a58cc036",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bd4b6589-b700-4fca-bc62-a3d54fce585a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bd4b6589-b700-4fca-bc62-a3d54fce585a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:01.182 23:10:41 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:29:01.182 23:10:41 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:29:01.182 23:10:41 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:29:01.182 23:10:41 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59832 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59832 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:29:01.182 23:10:41 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59832 00:29:01.182 killing process with pid 59832 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59832' 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59832 00:29:01.182 23:10:41 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59832 00:29:02.569 23:10:43 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:02.569 23:10:43 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:02.569 23:10:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:02.569 23:10:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.569 23:10:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.569 ************************************ 00:29:02.569 START TEST bdev_hello_world 00:29:02.569 ************************************ 00:29:02.569 23:10:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:02.829 [2024-12-09 23:10:43.218705] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:02.829 [2024-12-09 23:10:43.218830] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:29:02.829 [2024-12-09 23:10:43.376689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.090 [2024-12-09 23:10:43.478414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.663 [2024-12-09 23:10:44.024912] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:03.663 [2024-12-09 23:10:44.024957] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:03.663 [2024-12-09 23:10:44.024977] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:03.663 [2024-12-09 23:10:44.027429] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:03.663 [2024-12-09 23:10:44.027725] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:03.663 [2024-12-09 23:10:44.027755] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:03.663 [2024-12-09 23:10:44.028101] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
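The hello_world output above comes from the hello_bdev example: it opens Nvme0n1 from the shared JSON config, writes a buffer, reads it back, and prints the string just logged. The harness invocation reduces to the command visible in the xtrace:

  # -b selects which bdev from bdev.json the example opens.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1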
00:29:03.663 00:29:03.663 [2024-12-09 23:10:44.028120] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:04.234 00:29:04.234 real 0m1.596s 00:29:04.234 user 0m1.328s 00:29:04.234 sys 0m0.160s 00:29:04.234 23:10:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.234 ************************************ 00:29:04.234 END TEST bdev_hello_world 00:29:04.234 ************************************ 00:29:04.234 23:10:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 23:10:44 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:29:04.234 23:10:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:04.234 23:10:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.234 23:10:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 ************************************ 00:29:04.234 START TEST bdev_bounds 00:29:04.234 ************************************ 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:29:04.234 Process bdevio pid: 59953 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59953 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59953' 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59953 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59953 ']' 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:04.234 23:10:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 [2024-12-09 23:10:44.854013] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
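bdev_bounds drives bdevio as an RPC-served process rather than a one-shot binary: -w parks it on /var/tmp/spdk.sock until a perform_tests RPC arrives (hence the waitforlisten above), and -s 0 passes the PRE_RESERVED_MEM=0 seen earlier in this job. The sequence reduces to two commands; a sketch with the same paths as the xtrace:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # Once the socket is listening, trigger the CUnit suites over RPC:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"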
00:29:04.234 [2024-12-09 23:10:44.854134] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:29:04.496 [2024-12-09 23:10:45.012893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:04.496 [2024-12-09 23:10:45.118884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.496 [2024-12-09 23:10:45.118940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.496 [2024-12-09 23:10:45.118932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.441 23:10:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.441 23:10:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:29:05.441 23:10:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:05.441 I/O targets: 00:29:05.441 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:05.441 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:05.441 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:05.441 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:05.441 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:05.441 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:05.441 00:29:05.441 00:29:05.441 CUnit - A unit testing framework for C - Version 2.1-3 00:29:05.441 http://cunit.sourceforge.net/ 00:29:05.441 00:29:05.441 00:29:05.441 Suite: bdevio tests on: Nvme3n1 00:29:05.441 Test: blockdev write read block ...passed 00:29:05.441 Test: blockdev write zeroes read block ...passed 00:29:05.441 Test: blockdev write zeroes read no split ...passed 00:29:05.441 Test: blockdev write zeroes read split ...passed 00:29:05.441 Test: blockdev write zeroes read split partial ...passed 00:29:05.441 Test: blockdev reset ...[2024-12-09 23:10:45.869428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:05.441 passed 00:29:05.441 Test: blockdev write read 8 blocks ...[2024-12-09 23:10:45.872191] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:29:05.441 passed 00:29:05.441 Test: blockdev write read size > 128k ...passed 00:29:05.441 Test: blockdev write read invalid size ...passed 00:29:05.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.441 Test: blockdev write read max offset ...passed 00:29:05.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.441 Test: blockdev writev readv 8 blocks ...passed 00:29:05.441 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.441 Test: blockdev writev readv block ...passed 00:29:05.441 Test: blockdev writev readv size > 128k ...passed 00:29:05.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.441 Test: blockdev comparev and writev ...[2024-12-09 23:10:45.879416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b840a000 len:0x1000 00:29:05.441 [2024-12-09 23:10:45.879464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.441 passed 00:29:05.441 Test: blockdev nvme passthru rw ...passed 00:29:05.441 Test: blockdev nvme passthru vendor specific ...passed 00:29:05.441 Test: blockdev nvme admin passthru ...[2024-12-09 23:10:45.880134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.441 [2024-12-09 23:10:45.880162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.441 passed 00:29:05.441 Test: blockdev copy ...passed 00:29:05.441 Suite: bdevio tests on: Nvme2n3 00:29:05.441 Test: blockdev write read block ...passed 00:29:05.441 Test: blockdev write zeroes read block ...passed 00:29:05.441 Test: blockdev write zeroes read no split ...passed 00:29:05.441 Test: blockdev write zeroes read split ...passed 00:29:05.441 Test: blockdev write zeroes read split partial ...passed 00:29:05.441 Test: blockdev reset ...[2024-12-09 23:10:45.939952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:05.441 [2024-12-09 23:10:45.945401] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:05.441 passed 00:29:05.441 Test: blockdev write read 8 blocks ...passed 00:29:05.441 Test: blockdev write read size > 128k ...passed 00:29:05.441 Test: blockdev write read invalid size ...passed 00:29:05.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.441 Test: blockdev write read max offset ...passed 00:29:05.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.441 Test: blockdev writev readv 8 blocks ...passed 00:29:05.441 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.441 Test: blockdev writev readv block ...passed 00:29:05.441 Test: blockdev writev readv size > 128k ...passed 00:29:05.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.441 Test: blockdev comparev and writev ...[2024-12-09 23:10:45.954229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc806000 len:0x1000 00:29:05.441 [2024-12-09 23:10:45.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.441 passed 00:29:05.441 Test: blockdev nvme passthru rw ...passed 00:29:05.441 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:10:45.954816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.441 [2024-12-09 23:10:45.954841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.441 passed 00:29:05.441 Test: blockdev nvme admin passthru ...passed 00:29:05.441 Test: blockdev copy ...passed 00:29:05.441 Suite: bdevio tests on: Nvme2n2 00:29:05.441 Test: blockdev write read block ...passed 00:29:05.441 Test: blockdev write zeroes read block ...passed 00:29:05.441 Test: blockdev write zeroes read no split ...passed 00:29:05.441 Test: blockdev write zeroes read split ...passed 00:29:05.441 Test: blockdev write zeroes read split partial ...passed 00:29:05.441 Test: blockdev reset ...[2024-12-09 23:10:46.006851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:05.441 [2024-12-09 23:10:46.010085] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:29:05.441 passed 00:29:05.441 Test: blockdev write read 8 blocks ...passed 00:29:05.441 Test: blockdev write read size > 128k ...passed 00:29:05.441 Test: blockdev write read invalid size ...passed 00:29:05.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.441 Test: blockdev write read max offset ...passed 00:29:05.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.441 Test: blockdev writev readv 8 blocks ...passed 00:29:05.441 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.441 Test: blockdev writev readv block ...passed 00:29:05.441 Test: blockdev writev readv size > 128k ...passed 00:29:05.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.441 Test: blockdev comparev and writev ...[2024-12-09 23:10:46.018084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d883c000 len:0x1000 00:29:05.441 [2024-12-09 23:10:46.018216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.441 passed 00:29:05.442 Test: blockdev nvme passthru rw ...passed 00:29:05.442 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:10:46.018995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.442 [2024-12-09 23:10:46.019101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.442 passed 00:29:05.442 Test: blockdev nvme admin passthru ...passed 00:29:05.442 Test: blockdev copy ...passed 00:29:05.442 Suite: bdevio tests on: Nvme2n1 00:29:05.442 Test: blockdev write read block ...passed 00:29:05.442 Test: blockdev write zeroes read block ...passed 00:29:05.442 Test: blockdev write zeroes read no split ...passed 00:29:05.442 Test: blockdev write zeroes read split ...passed 00:29:05.714 Test: blockdev write zeroes read split partial ...passed 00:29:05.714 Test: blockdev reset ...[2024-12-09 23:10:46.076631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:05.714 [2024-12-09 23:10:46.079690] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:29:05.714 passed 00:29:05.714 Test: blockdev write read 8 blocks ...passed 00:29:05.714 Test: blockdev write read size > 128k ...passed
00:29:05.714 Test: blockdev write read invalid size ...passed 00:29:05.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.714 Test: blockdev write read max offset ...passed 00:29:05.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.714 Test: blockdev writev readv 8 blocks ...passed 00:29:05.714 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.714 Test: blockdev writev readv block ...passed 00:29:05.714 Test: blockdev writev readv size > 128k ...passed 00:29:05.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.714 Test: blockdev comparev and writev ...[2024-12-09 23:10:46.086628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8838000 len:0x1000 00:29:05.714 [2024-12-09 23:10:46.086673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.714 passed 00:29:05.714 Test: blockdev nvme passthru rw ...passed 00:29:05.714 Test: blockdev nvme passthru vendor specific ...passed 00:29:05.714 Test: blockdev nvme admin passthru ...[2024-12-09 23:10:46.087378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.714 [2024-12-09 23:10:46.087401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.714 passed 00:29:05.714 Test: blockdev copy ...passed 00:29:05.714 Suite: bdevio tests on: Nvme1n1 00:29:05.714 Test: blockdev write read block ...passed 00:29:05.714 Test: blockdev write zeroes read block ...passed 00:29:05.714 Test: blockdev write zeroes read no split ...passed 00:29:05.714 Test: blockdev write zeroes read split ...passed 00:29:05.714 Test: blockdev write zeroes read split partial ...passed 00:29:05.714 Test: blockdev reset ...[2024-12-09 23:10:46.156898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:05.714 [2024-12-09 23:10:46.159639] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:29:05.714 passed
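A note on the *NOTICE* completions threaded through these suites: they are expected. The comparev test deliberately issues an NVMe COMPARE against mismatched data, so COMPARE FAILURE (02/85, status code type 0x2 "media errors" / status code 0x85) is the pass condition; likewise the passthru tests submit commands the QEMU controller rejects, making INVALID OPCODE (00/01) the pass condition. A sketch of that expected-status pattern (expect_status and the two variables are illustrative, not helpers from this repo):

  expect_status() {  # usage: expect_status <sct/sc> <completion-line>
      # Succeed only when the completion carries the status we provoked.
      [[ $2 == *"($1)"* ]]
  }
  expect_status "02/85" "$compare_completion"   # miscompare: comparev passes
  expect_status "00/01" "$passthru_completion"  # rejected opcode: passthru passes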
00:29:05.714 00:29:05.714 Test: blockdev write read 8 blocks ...passed 00:29:05.714 Test: blockdev write read size > 128k ...passed 00:29:05.714 Test: blockdev write read invalid size ...passed 00:29:05.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.714 Test: blockdev write read max offset ...passed 00:29:05.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.714 Test: blockdev writev readv 8 blocks ...passed 00:29:05.714 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.714 Test: blockdev writev readv block ...passed 00:29:05.714 Test: blockdev writev readv size > 128k ...passed 00:29:05.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.714 Test: blockdev comparev and writev ...[2024-12-09 23:10:46.169301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8834000 len:0x1000 00:29:05.714 [2024-12-09 23:10:46.169343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.714 passed 00:29:05.714 Test: blockdev nvme passthru rw ...passed 00:29:05.714 Test: blockdev nvme passthru vendor specific ...passed 00:29:05.714 Test: blockdev nvme admin passthru ...[2024-12-09 23:10:46.169896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.714 [2024-12-09 23:10:46.169920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.714 passed 00:29:05.714 Test: blockdev copy ...passed 00:29:05.714 Suite: bdevio tests on: Nvme0n1 00:29:05.714 Test: blockdev write read block ...passed 00:29:05.714 Test: blockdev write zeroes read block ...passed 00:29:05.714 Test: blockdev write zeroes read no split ...passed 00:29:05.714 Test: blockdev write zeroes read split ...passed 00:29:05.714 Test: blockdev write zeroes read split partial ...passed 00:29:05.714 Test: blockdev reset ...[2024-12-09 23:10:46.245746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:05.714 [2024-12-09 23:10:46.248576] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:05.714 passed
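Each blockdev reset above shows the same pair of notices: nvme_ctrlr_disconnect detaches the controller at its PCI address, then bdev_nvme_reset_ctrlr_complete reports the reattach. The same path can be exercised by hand against a running target with the stock rpc.py; a sketch, assuming the controller names registered by the attach calls earlier in this log:

  # Reset the controller behind the Nvme0n1 bdev (name from gen_nvme.sh).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme0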
00:29:05.714 00:29:05.714 Test: blockdev write read 8 blocks ...passed 00:29:05.714 Test: blockdev write read size > 128k ...passed 00:29:05.714 Test: blockdev write read invalid size ...passed 00:29:05.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.714 Test: blockdev write read max offset ...passed 00:29:05.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.714 Test: blockdev writev readv 8 blocks ...passed 00:29:05.714 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.714 Test: blockdev writev readv block ...passed 00:29:05.714 Test: blockdev writev readv size > 128k ...passed 00:29:05.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.714 Test: blockdev comparev and writev ...passed 00:29:05.714 Test: blockdev nvme passthru rw ...[2024-12-09 23:10:46.256596] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:05.714 separate metadata which is not supported yet. 00:29:05.714 passed 00:29:05.714 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:10:46.257153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:05.714 [2024-12-09 23:10:46.257192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:05.714 passed 00:29:05.714 Test: blockdev nvme admin passthru ...passed 00:29:05.714 Test: blockdev copy ...passed 00:29:05.714 00:29:05.714 Run Summary: Type Total Ran Passed Failed Inactive 00:29:05.714 suites 6 6 n/a 0 0 00:29:05.714 tests 138 138 138 0 0 00:29:05.714 asserts 893 893 893 0 n/a 00:29:05.714 00:29:05.714 Elapsed time = 1.164 seconds 00:29:05.714 0 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59953 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59953 ']' 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59953 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.714 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59953 00:29:05.714 killing process with pid 59953 00:29:05.715 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.715 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.715 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59953' 00:29:05.715 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59953 00:29:05.715 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59953 00:29:06.658 23:10:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:06.658 00:29:06.658 real 0m2.196s 00:29:06.658 user 0m5.627s 00:29:06.658 sys 0m0.293s 00:29:06.658 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.658 23:10:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 ************************************ 00:29:06.658 END TEST 
bdev_bounds 00:29:06.658 ************************************ 00:29:06.658 23:10:47 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:06.658 23:10:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:06.658 23:10:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.658 23:10:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 ************************************ 00:29:06.658 START TEST bdev_nbd 00:29:06.658 ************************************ 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60007 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60007 /var/tmp/spdk-nbd.sock 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60007 ']' 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.658 
23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:06.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.658 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 [2024-12-09 23:10:47.095975] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:06.659 [2024-12-09 23:10:47.096246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.659 [2024-12-09 23:10:47.253939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.920 [2024-12-09 23:10:47.357123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:07.491 23:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:07.753 23:10:48 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:07.753 1+0 records in 00:29:07.753 1+0 records out 00:29:07.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366948 s, 11.2 MB/s 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:07.753 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.015 1+0 records in 00:29:08.015 1+0 records out 00:29:08.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578487 s, 7.1 MB/s 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:08.015 23:10:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:08.015 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.275 1+0 records in 00:29:08.275 1+0 records out 00:29:08.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419766 s, 9.8 MB/s 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.275 1+0 records in 00:29:08.275 1+0 records out 00:29:08.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053743 s, 7.6 MB/s 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:08.275 23:10:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.536 1+0 records in 00:29:08.536 1+0 records out 00:29:08.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047493 s, 8.6 MB/s 00:29:08.536 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:08.537 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.802 1+0 records in 00:29:08.802 1+0 records out 00:29:08.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309202 s, 13.2 MB/s 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:08.802 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:09.062 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd0", 00:29:09.062 "bdev_name": "Nvme0n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd1", 00:29:09.062 "bdev_name": "Nvme1n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd2", 00:29:09.062 "bdev_name": "Nvme2n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd3", 00:29:09.062 "bdev_name": "Nvme2n2" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd4", 00:29:09.062 "bdev_name": "Nvme2n3" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd5", 00:29:09.062 "bdev_name": "Nvme3n1" 00:29:09.062 } 00:29:09.062 ]' 00:29:09.062 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:09.062 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd0", 00:29:09.062 "bdev_name": "Nvme0n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd1", 00:29:09.062 "bdev_name": "Nvme1n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 
"nbd_device": "/dev/nbd2", 00:29:09.062 "bdev_name": "Nvme2n1" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd3", 00:29:09.062 "bdev_name": "Nvme2n2" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd4", 00:29:09.062 "bdev_name": "Nvme2n3" 00:29:09.062 }, 00:29:09.062 { 00:29:09.062 "nbd_device": "/dev/nbd5", 00:29:09.062 "bdev_name": "Nvme3n1" 00:29:09.062 } 00:29:09.063 ]' 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.063 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.324 23:10:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.584 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.585 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:09.845 23:10:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.845 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:10.105 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:10.366 23:10:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:10.627 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:10.888 /dev/nbd0 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:10.888 1+0 records in 00:29:10.888 1+0 records out 00:29:10.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279482 s, 14.7 MB/s 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:10.888 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:29:11.150 /dev/nbd1 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:11.150 1+0 records in 00:29:11.150 1+0 records out 
00:29:11.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349539 s, 11.7 MB/s 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:11.150 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:29:11.413 /dev/nbd10 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:11.413 1+0 records in 00:29:11.413 1+0 records out 00:29:11.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339714 s, 12.1 MB/s 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:11.413 23:10:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:29:11.675 /dev/nbd11 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:29:11.675 23:10:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:11.675 1+0 records in 00:29:11.675 1+0 records out 00:29:11.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405532 s, 10.1 MB/s 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:11.675 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:29:11.936 /dev/nbd12 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:11.936 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:11.937 1+0 records in 00:29:11.937 1+0 records out 00:29:11.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441477 s, 9.3 MB/s 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:11.937 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:29:12.201 /dev/nbd13 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:12.201 1+0 records in 00:29:12.201 1+0 records out 00:29:12.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477603 s, 8.6 MB/s 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:12.201 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd0", 00:29:12.462 "bdev_name": "Nvme0n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd1", 00:29:12.462 "bdev_name": "Nvme1n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd10", 00:29:12.462 "bdev_name": "Nvme2n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd11", 00:29:12.462 "bdev_name": "Nvme2n2" 00:29:12.462 }, 
00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd12", 00:29:12.462 "bdev_name": "Nvme2n3" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd13", 00:29:12.462 "bdev_name": "Nvme3n1" 00:29:12.462 } 00:29:12.462 ]' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd0", 00:29:12.462 "bdev_name": "Nvme0n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd1", 00:29:12.462 "bdev_name": "Nvme1n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd10", 00:29:12.462 "bdev_name": "Nvme2n1" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd11", 00:29:12.462 "bdev_name": "Nvme2n2" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd12", 00:29:12.462 "bdev_name": "Nvme2n3" 00:29:12.462 }, 00:29:12.462 { 00:29:12.462 "nbd_device": "/dev/nbd13", 00:29:12.462 "bdev_name": "Nvme3n1" 00:29:12.462 } 00:29:12.462 ]' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:12.462 /dev/nbd1 00:29:12.462 /dev/nbd10 00:29:12.462 /dev/nbd11 00:29:12.462 /dev/nbd12 00:29:12.462 /dev/nbd13' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:12.462 /dev/nbd1 00:29:12.462 /dev/nbd10 00:29:12.462 /dev/nbd11 00:29:12.462 /dev/nbd12 00:29:12.462 /dev/nbd13' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:29:12.462 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:12.463 256+0 records in 00:29:12.463 256+0 records out 00:29:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725076 s, 145 MB/s 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:12.463 256+0 records in 00:29:12.463 256+0 records out 00:29:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0630606 s, 16.6 MB/s 00:29:12.463 23:10:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.463 23:10:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:12.463 256+0 records in 00:29:12.463 256+0 records out 00:29:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0747375 s, 14.0 MB/s 00:29:12.463 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.463 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:12.724 256+0 records in 00:29:12.724 256+0 records out 00:29:12.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0785675 s, 13.3 MB/s 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:12.724 256+0 records in 00:29:12.724 256+0 records out 00:29:12.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0684485 s, 15.3 MB/s 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:12.724 256+0 records in 00:29:12.724 256+0 records out 00:29:12.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0679183 s, 15.4 MB/s 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:12.724 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:12.987 256+0 records in 00:29:12.987 256+0 records out 00:29:12.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0686269 s, 15.3 MB/s 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.987 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:12.988 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:13.250 23:10:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:13.250 23:10:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:13.511 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:13.772 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:14.030 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:14.291 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:14.292 23:10:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:14.553 malloc_lvol_verify 00:29:14.553 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:14.816 487ba9e0-3b9f-4fcc-afe6-d9dd339e9341 00:29:14.816 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:15.076 1e2480ef-0595-4356-b082-54ad06c62d06 00:29:15.076 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:15.338 /dev/nbd0 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
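The nbd_common.sh@146-@150 checks above are the script waiting for the kernel to publish a capacity for the freshly attached lvol device before mkfs runs: /dev/nbd0 exists as soon as the nbd module loads, but /sys/block/nbd0/size only turns non-zero once the SPDK nbd server attaches a bdev behind it (8192 512-byte sectors here, i.e. the 4 MiB lvol just created). A simplified sketch of such a poll, reconstructed from the trace rather than copied from the real helper, whose retry count and sleep interval are not visible in this log:

  wait_for_nbd_set_capacity() {
    local nbd=${1##*/} i
    # /dev/nbdX exists as soon as the nbd module loads; its size only becomes
    # non-zero once the SPDK nbd server has attached a bdev behind it.
    for ((i = 1; i <= 20; i++)); do
      [[ -e /sys/block/$nbd/size ]] && (( $(< "/sys/block/$nbd/size") != 0 )) && return 0
      sleep 0.1
    done
    return 1
  }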
00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:15.338 mke2fs 1.47.0 (5-Feb-2023) 00:29:15.338 Discarding device blocks: 0/4096 done 00:29:15.338 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:15.338 00:29:15.338 Allocating group tables: 0/1 done 00:29:15.338 Writing inode tables: 0/1 done 00:29:15.338 Creating journal (1024 blocks): done 00:29:15.338 Writing superblocks and filesystem accounting information: 0/1 done 00:29:15.338 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:15.338 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:15.339 23:10:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60007 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60007 ']' 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60007 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60007 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.600 killing process with pid 60007 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60007' 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60007 00:29:15.600 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60007 00:29:16.542 23:10:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:16.542 00:29:16.542 real 0m9.829s 00:29:16.542 user 0m14.215s 00:29:16.542 sys 0m3.051s 00:29:16.542 ************************************ 00:29:16.542 END TEST bdev_nbd 00:29:16.542 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:16.542 23:10:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:16.542 ************************************ 00:29:16.542 23:10:56 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:29:16.542 23:10:56 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:29:16.542 skipping fio tests on NVMe due to multi-ns failures. 00:29:16.542 23:10:56 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:16.542 23:10:56 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:16.542 23:10:56 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:16.542 23:10:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:16.542 23:10:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.542 23:10:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:16.542 ************************************ 00:29:16.542 START TEST bdev_verify 00:29:16.542 ************************************ 00:29:16.542 23:10:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:16.542 [2024-12-09 23:10:56.969344] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:16.542 [2024-12-09 23:10:56.969468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:29:16.542 [2024-12-09 23:10:57.127553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:16.804 [2024-12-09 23:10:57.233263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.804 [2024-12-09 23:10:57.233269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.378 Running I/O for 5 seconds... 
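The verify pass whose results follow is a plain bdevperf run against the six NVMe bdevs; an equivalent hand invocation of the command shown in the @1129 trace above (the CI job also passes -C and an empty trailing argument, omitted in this sketch):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # Nvme0n1 .. Nvme3n1 bdev config
    -q 128      # 128 outstanding I/Os per job
    -o 4096     # 4 KiB I/O size
    -w verify   # write a pattern, read it back, compare
    -t 5        # run for 5 seconds
    -m 0x3      # reactors on cores 0 and 1, hence the 0x1/0x2 job masks in the results
  )
  "$bdevperf" "${args[@]}"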
00:29:19.710 23552.00 IOPS, 92.00 MiB/s [2024-12-09T23:11:01.288Z] 22720.00 IOPS, 88.75 MiB/s [2024-12-09T23:11:02.230Z] 22378.67 IOPS, 87.42 MiB/s [2024-12-09T23:11:03.175Z] 21360.00 IOPS, 83.44 MiB/s [2024-12-09T23:11:03.175Z] 20902.40 IOPS, 81.65 MiB/s
00:29:22.539 Latency(us)
00:29:22.539 [2024-12-09T23:11:03.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.539 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0xbd0bd
00:29:22.539 Nvme0n1 : 5.04 1702.01 6.65 0.00 0.00 74881.66 12603.08 86305.87
00:29:22.539 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:29:22.539 Nvme0n1 : 5.04 1727.50 6.75 0.00 0.00 73782.38 12855.14 77836.60
00:29:22.539 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0xa0000
00:29:22.539 Nvme1n1 : 5.07 1705.29 6.66 0.00 0.00 74649.07 8670.92 80659.69
00:29:22.539 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0xa0000 length 0xa0000
00:29:22.539 Nvme1n1 : 5.07 1729.81 6.76 0.00 0.00 73458.54 7461.02 75013.51
00:29:22.539 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0x80000
00:29:22.539 Nvme2n1 : 5.07 1704.84 6.66 0.00 0.00 74517.40 8973.39 74206.92
00:29:22.539 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x80000 length 0x80000
00:29:22.539 Nvme2n1 : 5.08 1738.13 6.79 0.00 0.00 73077.81 9729.58 73803.62
00:29:22.539 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0x80000
00:29:22.539 Nvme2n2 : 5.08 1713.35 6.69 0.00 0.00 74124.35 9880.81 74610.22
00:29:22.539 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x80000 length 0x80000
00:29:22.539 Nvme2n2 : 5.08 1736.94 6.78 0.00 0.00 72949.12 11947.72 76223.41
00:29:22.539 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0x80000
00:29:22.539 Nvme2n3 : 5.08 1712.87 6.69 0.00 0.00 73974.51 9779.99 80256.39
00:29:22.539 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x80000 length 0x80000
00:29:22.539 Nvme2n3 : 5.09 1735.77 6.78 0.00 0.00 72813.39 11796.48 79449.80
00:29:22.539 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x0 length 0x20000
00:29:22.539 Nvme3n1 : 5.08 1711.71 6.69 0.00 0.00 73825.50 11443.59 86305.87
00:29:22.539 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:22.539 Verification LBA range: start 0x20000 length 0x20000
00:29:22.539 Nvme3n1 : 5.09 1735.30 6.78 0.00 0.00 72696.83 7461.02 78643.20
00:29:22.539 [2024-12-09T23:11:03.175Z] ===================================================================================================================
00:29:22.539 [2024-12-09T23:11:03.175Z] Total : 20653.53 80.68 0.00 0.00 73722.47 7461.02 86305.87
00:29:23.926
00:29:23.926 real 0m7.559s user 0m13.742s sys 0m0.227s
00:29:23.926 23:11:04 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.926 ************************************ 00:29:23.926 END TEST bdev_verify 00:29:23.926 ************************************ 00:29:23.926 23:11:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:23.926 23:11:04 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:23.926 23:11:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:23.926 23:11:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.926 23:11:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:23.926 ************************************ 00:29:23.926 START TEST bdev_verify_big_io 00:29:23.926 ************************************ 00:29:23.926 23:11:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:24.185 [2024-12-09 23:11:04.569872] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:24.185 [2024-12-09 23:11:04.570006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:29:24.185 [2024-12-09 23:11:04.729070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:24.449 [2024-12-09 23:11:04.830645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.449 [2024-12-09 23:11:04.830876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.027 Running I/O for 5 seconds... 
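The big-I/O variant that follows is the same verify workload run with -o 65536, so each I/O is 64 KiB and the MiB/s column is simply IOPS divided by 16. A quick arithmetic check against the first Nvme0n1 row below:

  # 130.26 IOPS * 65536 bytes per I/O / 1048576 bytes per MiB = 8.14 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 130.26 * 65536 / 1048576 }'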
00:29:28.140 625.00 IOPS, 39.06 MiB/s [2024-12-09T23:11:10.700Z] 1338.00 IOPS, 83.62 MiB/s [2024-12-09T23:11:11.643Z] 1535.33 IOPS, 95.96 MiB/s [2024-12-09T23:11:11.643Z] 2081.50 IOPS, 130.09 MiB/s
00:29:31.007 Latency(us)
00:29:31.007 [2024-12-09T23:11:11.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.007 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0xbd0b
00:29:31.007 Nvme0n1 : 5.65 130.26 8.14 0.00 0.00 952798.85 16535.24 1103424.59
00:29:31.007 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0xbd0b length 0xbd0b
00:29:31.007 Nvme0n1 : 5.48 116.81 7.30 0.00 0.00 1048982.92 29642.44 1109877.37
00:29:31.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0xa000
00:29:31.007 Nvme1n1 : 5.74 130.07 8.13 0.00 0.00 912153.41 85902.57 922746.88
00:29:31.007 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0xa000 length 0xa000
00:29:31.007 Nvme1n1 : 5.74 122.61 7.66 0.00 0.00 978146.86 71787.13 922746.88
00:29:31.007 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0x8000
00:29:31.007 Nvme2n1 : 5.75 133.66 8.35 0.00 0.00 865687.24 92355.35 871124.68
00:29:31.007 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x8000 length 0x8000
00:29:31.007 Nvme2n1 : 5.87 126.47 7.90 0.00 0.00 919241.12 66947.54 884030.23
00:29:31.007 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0x8000
00:29:31.007 Nvme2n2 : 5.87 141.29 8.83 0.00 0.00 792113.16 58074.98 903388.55
00:29:31.007 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x8000 length 0x8000
00:29:31.007 Nvme2n2 : 5.87 126.25 7.89 0.00 0.00 888455.35 66947.54 890483.00
00:29:31.007 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0x8000
00:29:31.007 Nvme2n3 : 5.94 150.93 9.43 0.00 0.00 720443.36 24197.91 935652.43
00:29:31.007 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x8000 length 0x8000
00:29:31.007 Nvme2n3 : 5.91 133.51 8.34 0.00 0.00 819330.60 37708.41 890483.00
00:29:31.007 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x0 length 0x2000
00:29:31.007 Nvme3n1 : 5.98 168.59 10.54 0.00 0.00 625436.56 781.39 1974549.27
00:29:31.007 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:31.007 Verification LBA range: start 0x2000 length 0x2000
00:29:31.007 Nvme3n1 : 5.94 150.81 9.43 0.00 0.00 705718.23 1840.05 884030.23
00:29:31.007 [2024-12-09T23:11:11.643Z] ===================================================================================================================
00:29:31.007 [2024-12-09T23:11:11.643Z] Total : 1631.26 101.95 0.00 0.00 838143.31 781.39 1974549.27
00:29:33.554
00:29:33.554 real 0m9.289s user 0m16.599s sys 0m0.243s
00:29:33.554 23:11:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:33.554 ************************************ 00:29:33.554 END TEST bdev_verify_big_io 00:29:33.554 23:11:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.554 ************************************ 00:29:33.554 23:11:13 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:33.554 23:11:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:33.554 23:11:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.554 23:11:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.554 ************************************ 00:29:33.554 START TEST bdev_write_zeroes 00:29:33.554 ************************************ 00:29:33.554 23:11:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:33.554 [2024-12-09 23:11:13.903025] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:33.554 [2024-12-09 23:11:13.903148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:29:33.554 [2024-12-09 23:11:14.060809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.554 [2024-12-09 23:11:14.165079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.135 Running I/O for 1 seconds... 
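bdevperf's write_zeroes workload drives the bdev zero-fill path instead of data writes. Whether a bdev claims native support for it can be probed over the same RPC mechanism the nbd tests used, against a running SPDK target; a minimal check (the supported_io_types.write_zeroes field name matches recent SPDK bdev_get_bdevs output, but verify it against your build):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0].supported_io_types.write_zeroes'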
00:29:35.526 58688.00 IOPS, 229.25 MiB/s
00:29:35.526
00:29:35.526 Latency(us)
00:29:35.526 [2024-12-09T23:11:16.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.526 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme0n1 : 1.02 9797.28 38.27 0.00 0.00 13039.00 9074.22 21979.77
00:29:35.526 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme1n1 : 1.02 9785.96 38.23 0.00 0.00 13036.97 9225.45 23492.14
00:29:35.526 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme2n1 : 1.02 9774.86 38.18 0.00 0.00 12992.55 9175.04 21979.77
00:29:35.526 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme2n2 : 1.02 9763.72 38.14 0.00 0.00 12990.42 9225.45 22584.71
00:29:35.526 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme2n3 : 1.02 9752.62 38.10 0.00 0.00 12981.07 9578.34 22181.42
00:29:35.526 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:35.526 Nvme3n1 : 1.02 9679.19 37.81 0.00 0.00 13009.97 8418.86 22685.54
00:29:35.526 [2024-12-09T23:11:16.162Z] ===================================================================================================================
00:29:35.526 [2024-12-09T23:11:16.162Z] Total : 58553.63 228.73 0.00 0.00 13008.33 8418.86 23492.14
00:29:36.098
00:29:36.098 real 0m2.797s user 0m2.492s sys 0m0.188s
00:29:36.098 23:11:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:36.098 23:11:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:29:36.098 ************************************
00:29:36.098 END TEST bdev_write_zeroes
00:29:36.098 ************************************
00:29:36.098 23:11:16 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:36.098 23:11:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:29:36.098 23:11:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:36.098 23:11:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:36.098 ************************************
00:29:36.098 START TEST bdev_json_nonenclosed
00:29:36.098 ************************************
00:29:36.098 23:11:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:36.360 [2024-12-09 23:11:16.746970] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
00:29:36.360 [2024-12-09 23:11:16.747092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:29:36.360 [2024-12-09 23:11:16.906125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.622 [2024-12-09 23:11:17.026494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.622 [2024-12-09 23:11:17.026578] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:36.622 [2024-12-09 23:11:17.026596] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:36.622 [2024-12-09 23:11:17.026605] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:36.622 00:29:36.622 real 0m0.533s 00:29:36.622 user 0m0.323s 00:29:36.622 sys 0m0.105s 00:29:36.622 ************************************ 00:29:36.622 END TEST bdev_json_nonenclosed 00:29:36.622 ************************************ 00:29:36.622 23:11:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.622 23:11:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:36.622 23:11:17 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:36.622 23:11:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:36.622 23:11:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.622 23:11:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:36.622 ************************************ 00:29:36.622 START TEST bdev_json_nonarray 00:29:36.622 ************************************ 00:29:36.622 23:11:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:36.883 [2024-12-09 23:11:17.311907] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:36.883 [2024-12-09 23:11:17.312031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:29:36.883 [2024-12-09 23:11:17.476194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.144 [2024-12-09 23:11:17.579636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.144 [2024-12-09 23:11:17.579725] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
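(Context for the two negative tests here: json_config_prepare_ctx expects a top-level JSON object whose "subsystems" key is an array, which is exactly what the two error notices above check. Illustrative shapes only; the literal nonenclosed.json and nonarray.json fixtures are not reproduced in this log:

valid:         { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
nonenclosed:   "subsystems": [ ... ]         <- not enclosed in {}
nonarray:      { "subsystems": { ... } }     <- "subsystems" is not an array
)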
00:29:37.144 [2024-12-09 23:11:17.579743] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:37.144 [2024-12-09 23:11:17.579752] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:37.144 00:29:37.144 real 0m0.511s 00:29:37.144 user 0m0.313s 00:29:37.144 sys 0m0.094s 00:29:37.144 23:11:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.144 23:11:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:37.144 ************************************ 00:29:37.144 END TEST bdev_json_nonarray 00:29:37.144 ************************************ 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:37.406 23:11:17 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:37.406 00:29:37.406 real 0m37.714s 00:29:37.406 user 0m57.798s 00:29:37.406 sys 0m5.097s 00:29:37.406 ************************************ 00:29:37.406 END TEST blockdev_nvme 00:29:37.406 ************************************ 00:29:37.406 23:11:17 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.406 23:11:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:37.406 23:11:17 -- spdk/autotest.sh@209 -- # uname -s 00:29:37.406 23:11:17 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:29:37.406 23:11:17 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:37.406 23:11:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.406 23:11:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.406 23:11:17 -- common/autotest_common.sh@10 -- # set +x 00:29:37.406 ************************************ 00:29:37.406 START TEST blockdev_nvme_gpt 00:29:37.406 ************************************ 00:29:37.406 23:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:37.406 * Looking for test storage... 
00:29:37.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:37.406 23:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.406 23:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.406 23:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.406 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.406 23:11:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:29:37.406 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.406 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.406 --rc genhtml_branch_coverage=1 00:29:37.406 --rc genhtml_function_coverage=1 00:29:37.406 --rc genhtml_legend=1 00:29:37.406 --rc geninfo_all_blocks=1 00:29:37.406 --rc geninfo_unexecuted_blocks=1 00:29:37.406 00:29:37.406 ' 00:29:37.406 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.406 --rc 
genhtml_branch_coverage=1 00:29:37.406 --rc genhtml_function_coverage=1 00:29:37.406 --rc genhtml_legend=1 00:29:37.406 --rc geninfo_all_blocks=1 00:29:37.407 --rc geninfo_unexecuted_blocks=1 00:29:37.407 00:29:37.407 ' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.407 --rc genhtml_branch_coverage=1 00:29:37.407 --rc genhtml_function_coverage=1 00:29:37.407 --rc genhtml_legend=1 00:29:37.407 --rc geninfo_all_blocks=1 00:29:37.407 --rc geninfo_unexecuted_blocks=1 00:29:37.407 00:29:37.407 ' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.407 --rc genhtml_branch_coverage=1 00:29:37.407 --rc genhtml_function_coverage=1 00:29:37.407 --rc genhtml_legend=1 00:29:37.407 --rc geninfo_all_blocks=1 00:29:37.407 --rc geninfo_unexecuted_blocks=1 00:29:37.407 00:29:37.407 ' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60754 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60754 
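(The waitforlisten step traced below blocks until the freshly started spdk_tgt answers on its default RPC socket, /var/tmp/spdk.sock. A hand-rolled equivalent, offered only as a sketch and assuming scripts/rpc.py and the spdk_get_version RPC from the same checkout:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
# poll the default RPC socket until the target responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done
)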
00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60754 ']' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.407 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.407 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:37.668 [2024-12-09 23:11:18.109947] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:37.668 [2024-12-09 23:11:18.110080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60754 ] 00:29:37.668 [2024-12-09 23:11:18.266332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.930 [2024-12-09 23:11:18.370290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.504 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.504 23:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:29:38.504 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:29:38.504 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:29:38.504 23:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:38.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:38.766 Waiting for block devices as requested 00:29:39.027 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:39.027 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:39.027 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:39.027 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:44.324 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:44.324 23:11:24 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:44.324 BYT; 00:29:44.324 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:44.324 BYT; 00:29:44.324 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:29:44.324 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:44.325 23:11:24 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:44.325 23:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:45.276 The operation has completed successfully. 00:29:45.276 23:11:25 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:46.231 The operation has completed successfully. 00:29:46.231 23:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:46.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:47.372 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:47.372 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:47.372 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:47.372 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:47.372 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:29:47.372 23:11:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.372 23:11:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.372 [] 00:29:47.372 23:11:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.372 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:29:47.372 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:29:47.372 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:47.372 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:47.373 23:11:27 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:47.373 23:11:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.373 23:11:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:29:47.632 23:11:28 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:29:47.632 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.632 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:47.891 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "282c4db1-e55d-47c5-849b-2ae10e71b70b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "282c4db1-e55d-47c5-849b-2ae10e71b70b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "81bd9536-0ebd-4139-a54a-60eb97188fc0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81bd9536-0ebd-4139-a54a-60eb97188fc0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7d2a6b40-1aaf-436b-aedd-06fb6e9bae47"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d2a6b40-1aaf-436b-aedd-06fb6e9bae47",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cad8f0ab-b584-41ca-8ff2-fdd7a6a8035b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cad8f0ab-b584-41ca-8ff2-fdd7a6a8035b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ed8d9d1c-65d3-4d99-8e03-8633d44620f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ed8d9d1c-65d3-4d99-8e03-8633d44620f5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:29:47.892 23:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60754 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60754 ']' 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60754 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60754 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.892 killing process with pid 60754 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60754' 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60754 00:29:47.892 23:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60754 00:29:49.276 23:11:29 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:49.276 23:11:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:49.276 23:11:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:49.276 23:11:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.276 23:11:29 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:49.276 ************************************ 00:29:49.276 START TEST bdev_hello_world 00:29:49.276 ************************************ 00:29:49.276 23:11:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:49.537 [2024-12-09 23:11:29.957443] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:49.537 [2024-12-09 23:11:29.957564] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61380 ] 00:29:49.537 [2024-12-09 23:11:30.114236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.797 [2024-12-09 23:11:30.216764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.368 [2024-12-09 23:11:30.760822] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:50.368 [2024-12-09 23:11:30.760875] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:50.368 [2024-12-09 23:11:30.760897] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:50.368 [2024-12-09 23:11:30.763381] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:50.368 [2024-12-09 23:11:30.763882] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:50.368 [2024-12-09 23:11:30.763913] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:50.368 [2024-12-09 23:11:30.764059] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
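(The hello-world pass above reduces to one example binary run against the generated bdev.json; standalone, with the paths printed by this run. The -b flag picks which bdev the example opens, and the harness selected Nvme0n1 as hello_world_bdev earlier in this log:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1
)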
00:29:50.368 00:29:50.368 [2024-12-09 23:11:30.764078] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:50.939 00:29:50.939 real 0m1.596s 00:29:50.939 user 0m1.322s 00:29:50.939 sys 0m0.167s 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:50.939 ************************************ 00:29:50.939 END TEST bdev_hello_world 00:29:50.939 ************************************ 00:29:50.939 23:11:31 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:29:50.939 23:11:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.939 23:11:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.939 23:11:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:50.939 ************************************ 00:29:50.939 START TEST bdev_bounds 00:29:50.939 ************************************ 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61417 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:50.939 Process bdevio pid: 61417 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61417' 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61417 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61417 ']' 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.939 23:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:51.201 [2024-12-09 23:11:31.594589] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:51.201 [2024-12-09 23:11:31.594718] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61417 ] 00:29:51.201 [2024-12-09 23:11:31.754099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.462 [2024-12-09 23:11:31.859711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.463 [2024-12-09 23:11:31.859787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.463 [2024-12-09 23:11:31.860172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.034 23:11:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.034 23:11:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:29:52.034 23:11:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:52.034 I/O targets: 00:29:52.034 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:52.034 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:52.034 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:52.034 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:52.034 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:52.034 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:52.034 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:52.034 00:29:52.034 00:29:52.034 CUnit - A unit testing framework for C - Version 2.1-3 00:29:52.034 http://cunit.sourceforge.net/ 00:29:52.034 00:29:52.034 00:29:52.034 Suite: bdevio tests on: Nvme3n1 00:29:52.034 Test: blockdev write read block ...passed 00:29:52.034 Test: blockdev write zeroes read block ...passed 00:29:52.034 Test: blockdev write zeroes read no split ...passed 00:29:52.034 Test: blockdev write zeroes read split ...passed 00:29:52.034 Test: blockdev write zeroes read split partial ...passed 00:29:52.034 Test: blockdev reset ...[2024-12-09 23:11:32.659246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:52.034 [2024-12-09 23:11:32.662262] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:29:52.034 passed 00:29:52.034 Test: blockdev write read 8 blocks ...passed 00:29:52.034 Test: blockdev write read size > 128k ...passed 00:29:52.034 Test: blockdev write read invalid size ...passed 00:29:52.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.034 Test: blockdev write read max offset ...passed 00:29:52.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.034 Test: blockdev writev readv 8 blocks ...passed 00:29:52.034 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.034 Test: blockdev writev readv block ...passed 00:29:52.034 Test: blockdev writev readv size > 128k ...passed 00:29:52.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.034 Test: blockdev comparev and writev ...[2024-12-09 23:11:32.668001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a04000 len:0x1000 00:29:52.034 passed 00:29:52.034 Test: blockdev nvme passthru rw ...[2024-12-09 23:11:32.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:52.034 passed 00:29:52.315 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:11:32.668669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:52.315 [2024-12-09 23:11:32.668689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:52.315 passed 00:29:52.315 Test: blockdev nvme admin passthru ...passed 00:29:52.315 Test: blockdev copy ...passed 00:29:52.315 Suite: bdevio tests on: Nvme2n3 00:29:52.315 Test: blockdev write read block ...passed 00:29:52.315 Test: blockdev write zeroes read block ...passed 00:29:52.315 Test: blockdev write zeroes read no split ...passed 00:29:52.315 Test: blockdev write zeroes read split ...passed 00:29:52.315 Test: blockdev write zeroes read split partial ...passed 00:29:52.315 Test: blockdev reset ...[2024-12-09 23:11:32.884220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:52.315 [2024-12-09 23:11:32.887732] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:52.315 passed 00:29:52.315 Test: blockdev write read 8 blocks ...passed 00:29:52.315 Test: blockdev write read size > 128k ...passed 00:29:52.315 Test: blockdev write read invalid size ...passed 00:29:52.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.315 Test: blockdev write read max offset ...passed 00:29:52.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.315 Test: blockdev writev readv 8 blocks ...passed 00:29:52.315 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.315 Test: blockdev writev readv block ...passed 00:29:52.315 Test: blockdev writev readv size > 128k ...passed 00:29:52.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.316 Test: blockdev comparev and writev ...[2024-12-09 23:11:32.893875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a02000 len:0x1000 00:29:52.316 [2024-12-09 23:11:32.893916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:52.316 passed 00:29:52.316 Test: blockdev nvme passthru rw ...passed 00:29:52.316 Test: blockdev nvme passthru vendor specific ...passed 00:29:52.316 Test: blockdev nvme admin passthru ...[2024-12-09 23:11:32.894560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:52.316 [2024-12-09 23:11:32.894581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:52.316 passed 00:29:52.316 Test: blockdev copy ...passed 00:29:52.316 Suite: bdevio tests on: Nvme2n2 00:29:52.316 Test: blockdev write read block ...passed 00:29:52.581 Test: blockdev write zeroes read block ...passed 00:29:52.581 Test: blockdev write zeroes read no split ...passed 00:29:52.581 Test: blockdev write zeroes read split ...passed 00:29:52.581 Test: blockdev write zeroes read split partial ...passed 00:29:52.581 Test: blockdev reset ...[2024-12-09 23:11:33.161083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:52.581 [2024-12-09 23:11:33.164174] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:52.581 passed 00:29:52.581 Test: blockdev write read 8 blocks ...passed 00:29:52.581 Test: blockdev write read size > 128k ...passed 00:29:52.581 Test: blockdev write read invalid size ...passed 00:29:52.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.581 Test: blockdev write read max offset ...passed 00:29:52.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.581 Test: blockdev writev readv 8 blocks ...passed 00:29:52.581 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.581 Test: blockdev writev readv block ...passed 00:29:52.581 Test: blockdev writev readv size > 128k ...passed 00:29:52.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.581 Test: blockdev comparev and writev ...[2024-12-09 23:11:33.170292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da638000 len:0x1000 00:29:52.581 [2024-12-09 23:11:33.170334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:52.581 passed 00:29:52.581 Test: blockdev nvme passthru rw ...passed 00:29:52.581 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:11:33.170900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:52.581 [2024-12-09 23:11:33.170924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:52.581 passed 00:29:52.581 Test: blockdev nvme admin passthru ...passed 00:29:52.581 Test: blockdev copy ...passed 00:29:52.581 Suite: bdevio tests on: Nvme2n1 00:29:52.581 Test: blockdev write read block ...passed 00:29:52.842 Test: blockdev write zeroes read block ...passed 00:29:52.842 Test: blockdev write zeroes read no split ...passed 00:29:52.842 Test: blockdev write zeroes read split ...passed 00:29:52.842 Test: blockdev write zeroes read split partial ...passed 00:29:52.842 Test: blockdev reset ...[2024-12-09 23:11:33.445957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:52.842 passed 00:29:52.842 Test: blockdev write read 8 blocks ...passed 00:29:52.842 Test: blockdev write read size > 128k ...[2024-12-09 23:11:33.448960] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:52.842 passed 00:29:52.842 Test: blockdev write read invalid size ...passed 00:29:52.842 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.842 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.842 Test: blockdev write read max offset ...passed 00:29:52.842 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.842 Test: blockdev writev readv 8 blocks ...passed 00:29:52.842 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.842 Test: blockdev writev readv block ...passed 00:29:52.842 Test: blockdev writev readv size > 128k ...passed 00:29:52.842 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.842 Test: blockdev comparev and writev ...[2024-12-09 23:11:33.454410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da634000 len:0x1000 00:29:52.842 passed 00:29:52.842 Test: blockdev nvme passthru rw ...[2024-12-09 23:11:33.454451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:52.842 passed 00:29:52.842 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:11:33.454871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:52.842 passed 00:29:52.842 Test: blockdev nvme admin passthru ...[2024-12-09 23:11:33.454898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:52.842 passed 00:29:52.842 Test: blockdev copy ...passed 00:29:52.842 Suite: bdevio tests on: Nvme1n1p2 00:29:52.842 Test: blockdev write read block ...passed 00:29:53.420 Test: blockdev write zeroes read block ...passed 00:29:53.420 Test: blockdev write zeroes read no split ...passed 00:29:53.420 Test: blockdev write zeroes read split ...passed 00:29:53.420 Test: blockdev write zeroes read split partial ...passed 00:29:53.420 Test: blockdev reset ...passed 00:29:53.420 Test: blockdev write read 8 blocks ...passed 00:29:53.420 Test: blockdev write read size > 128k ...passed 00:29:53.420 Test: blockdev write read invalid size ...passed 00:29:53.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:53.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:53.420 Test: blockdev write read max offset ...passed 00:29:53.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:53.420 Test: blockdev writev readv 8 blocks ...passed 00:29:53.420 Test: blockdev writev readv 30 x 1block ...passed 00:29:53.420 Test: blockdev writev readv block ...passed 00:29:53.420 Test: blockdev writev readv size > 128k ...passed 00:29:53.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:53.420 Test: blockdev comparev and writev ...passed 00:29:53.420 Test: blockdev nvme passthru rw ...passed 00:29:53.420 Test: blockdev nvme passthru vendor specific ...passed 00:29:53.420 Test: blockdev nvme admin passthru ...passed 00:29:53.420 Test: blockdev copy ...passed 00:29:53.420 Suite: bdevio tests on: Nvme1n1p1 00:29:53.420 Test: blockdev write read block ...passed 00:29:53.420 Test: blockdev write zeroes read block ...[2024-12-09 23:11:33.735214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:53.420 [2024-12-09 23:11:33.739413] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
[0000:00:11.0, 0] Resetting controller successful. 00:29:53.420 [2024-12-09 23:11:33.760405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2da630000 len:0x1000 00:29:53.420 [2024-12-09 23:11:33.760447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:53.420 passed 00:29:53.420 Test: blockdev write zeroes read no split ...passed 00:29:53.420 Test: blockdev write zeroes read split ...passed 00:29:53.420 Test: blockdev write zeroes read split partial ...passed 00:29:53.420 Test: blockdev reset ...[2024-12-09 23:11:34.019904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:53.420 [2024-12-09 23:11:34.022462] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:29:53.420 passed 00:29:53.420 Test: blockdev write read 8 blocks ...passed 00:29:53.420 Test: blockdev write read size > 128k ...passed 00:29:53.420 Test: blockdev write read invalid size ...passed 00:29:53.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:53.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:53.420 Test: blockdev write read max offset ...passed 00:29:53.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:53.420 Test: blockdev writev readv 8 blocks ...passed 00:29:53.421 Test: blockdev writev readv 30 x 1block ...passed 00:29:53.421 Test: blockdev writev readv block ...passed 00:29:53.421 Test: blockdev writev readv size > 128k ...passed 00:29:53.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:53.421 Test: blockdev comparev and writev ...[2024-12-09 23:11:34.028941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c600e000 len:0x1000 00:29:53.421 [2024-12-09 23:11:34.028995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:53.421 passed 00:29:53.421 Test: blockdev nvme passthru rw ...passed 00:29:53.421 Test: blockdev nvme passthru vendor specific ...passed 00:29:53.421 Test: blockdev nvme admin passthru ...passed 00:29:53.421 Test: blockdev copy ...passed 00:29:53.421 Suite: bdevio tests on: Nvme0n1 00:29:53.421 Test: blockdev write read block ...passed 00:29:53.682 Test: blockdev write zeroes read block ...passed 00:29:53.943 Test: blockdev write zeroes read no split ...passed 00:29:53.943 Test: blockdev write zeroes read split ...passed 00:29:53.943 Test: blockdev write zeroes read split partial ...passed 00:29:53.943 Test: blockdev reset ...[2024-12-09 23:11:34.444058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:53.943 [2024-12-09 23:11:34.446595] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
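Each suite's reset test follows the pattern in the notices just above: bdev_nvme disconnects the controller at its PCI address, reconnects it, and logs "Resetting controller successful" before the write/read cases continue. As a hedged sketch, the same path can be exercised by hand through the RPC interface; the controller name "Nvme0" and the default RPC socket are assumptions:

    # minimal sketch, assuming a running SPDK app with a controller attached as Nvme0
    scripts/rpc.py bdev_nvme_reset_controller Nvme0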
00:29:53.943 passed
00:29:53.943 Test: blockdev write read 8 blocks ...passed
00:29:53.943 Test: blockdev write read size > 128k ...passed
00:29:53.943 Test: blockdev write read invalid size ...passed
00:29:53.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:29:53.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:29:53.943 Test: blockdev write read max offset ...passed
00:29:53.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:29:53.943 Test: blockdev writev readv 8 blocks ...passed
00:29:53.943 Test: blockdev writev readv 30 x 1block ...passed
00:29:53.943 Test: blockdev writev readv block ...passed
00:29:53.943 Test: blockdev writev readv size > 128k ...passed
00:29:53.943 Test: blockdev writev readv size > 128k in two iovs ...passed
00:29:53.943 Test: blockdev comparev and writev ...[2024-12-09 23:11:34.452040] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:29:53.943 separate metadata which is not supported yet.
00:29:53.943 passed
00:29:53.943 Test: blockdev nvme passthru rw ...passed
00:29:53.943 Test: blockdev nvme passthru vendor specific ...passed
00:29:53.943 Test: blockdev nvme admin passthru ...[2024-12-09 23:11:34.452459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:29:53.943 [2024-12-09 23:11:34.452490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:29:53.943 passed
00:29:53.943 Test: blockdev copy ...passed
00:29:53.943
00:29:53.943 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:29:53.943               suites      7      7    n/a      0        0
00:29:53.943                tests    161    161    161      0        0
00:29:53.943              asserts   1025   1025   1025      0      n/a
00:29:53.943
00:29:53.943 Elapsed time = 4.132 seconds
00:29:53.943 0
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61417
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61417 ']'
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61417
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61417
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61417'
00:29:53.943 killing process with pid 61417
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61417
00:29:53.943 23:11:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61417
00:29:54.514 23:11:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:29:54.514
00:29:54.514 real 0m3.557s
00:29:54.514 user 0m8.303s
00:29:54.514 sys 0m0.298s
00:29:54.514 23:11:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:54.514 ************************************
00:29:54.514 END TEST bdev_bounds
************************************ 00:29:54.514 23:11:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:54.514 23:11:35 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:54.514 23:11:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:54.514 23:11:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.514 23:11:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:54.776 ************************************ 00:29:54.776 START TEST bdev_nbd 00:29:54.776 ************************************ 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61488 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61488 /var/tmp/spdk-nbd.sock 00:29:54.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
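waitforlisten blocks at this point until the freshly spawned bdev_svc creates its RPC socket. A minimal sketch of that readiness loop, assuming a roughly 10-second budget (the real helper's timeout and retry interval may differ):

    wait_for_rpc_sock() {
        # poll until the UNIX socket node appears, i.e. the server is accepting RPCs
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk-nbd.sock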
00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61488 ']' 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.776 23:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:54.776 [2024-12-09 23:11:35.223044] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:54.776 [2024-12-09 23:11:35.223164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.776 [2024-12-09 23:11:35.378397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.037 [2024-12-09 23:11:35.462959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.610 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.610 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:29:55.610 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:55.610 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.610 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:55.611 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 
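The nbd_device=/dev/nbd0 assignment above captures what nbd_start_disk prints on stdout: called without an explicit device, it lets the kernel pick the next free /dev/nbdX. Condensed, the startup loop the following trace walks through looks roughly like this (waitfornbd is the helper traced below):

    bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    for bdev in "${bdev_list[@]}"; do
        nbd=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd")"   # block until the node is readable
    done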
00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.875 1+0 records in 00:29:55.875 1+0 records out 00:29:55.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274918 s, 14.9 MB/s 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:55.875 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:29:55.875 1+0 records in 00:29:55.876 1+0 records out 00:29:55.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311425 s, 13.2 MB/s 00:29:55.876 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.138 1+0 records in 00:29:56.138 1+0 records out 00:29:56.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245602 s, 16.7 MB/s 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:56.138 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:56.399 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:56.400 23:11:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.400 1+0 records in 00:29:56.400 1+0 records out 00:29:56.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443307 s, 9.2 MB/s 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:56.400 23:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.662 1+0 records in 
00:29:56.662 1+0 records out 00:29:56.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056714 s, 7.2 MB/s 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:56.662 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.924 1+0 records in 00:29:56.924 1+0 records out 00:29:56.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382561 s, 10.7 MB/s 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:56.924 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:57.186 1+0 records in 00:29:57.186 1+0 records out 00:29:57.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587917 s, 7.0 MB/s 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:57.186 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd0", 00:29:57.448 "bdev_name": "Nvme0n1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd1", 00:29:57.448 "bdev_name": "Nvme1n1p1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd2", 00:29:57.448 "bdev_name": "Nvme1n1p2" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd3", 00:29:57.448 "bdev_name": "Nvme2n1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd4", 00:29:57.448 "bdev_name": "Nvme2n2" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd5", 00:29:57.448 "bdev_name": "Nvme2n3" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd6", 00:29:57.448 "bdev_name": "Nvme3n1" 00:29:57.448 } 00:29:57.448 ]' 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd0", 00:29:57.448 "bdev_name": "Nvme0n1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 
"nbd_device": "/dev/nbd1", 00:29:57.448 "bdev_name": "Nvme1n1p1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd2", 00:29:57.448 "bdev_name": "Nvme1n1p2" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd3", 00:29:57.448 "bdev_name": "Nvme2n1" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd4", 00:29:57.448 "bdev_name": "Nvme2n2" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd5", 00:29:57.448 "bdev_name": "Nvme2n3" 00:29:57.448 }, 00:29:57.448 { 00:29:57.448 "nbd_device": "/dev/nbd6", 00:29:57.448 "bdev_name": "Nvme3n1" 00:29:57.448 } 00:29:57.448 ]' 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.448 23:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.708 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.970 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.233 23:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.494 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 
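The waitfornbd_exit calls running through this teardown invert the startup check: after nbd_stop_disk, the helper polls /proc/partitions until the named device disappears. A minimal sketch under the same 20-iteration budget seen in the trace (the sleep interval is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done as soon as the device is no longer listed
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }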
00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.755 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:59.017 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 
'Nvme3n1') 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:59.279 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:59.279 /dev/nbd0 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.540 1+0 records in 00:29:59.540 1+0 records out 00:29:59.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551786 s, 7.4 MB/s 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:59.540 23:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:29:59.540 /dev/nbd1 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.540 1+0 records in 00:29:59.540 1+0 records out 00:29:59.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380419 s, 10.8 MB/s 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:59.540 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:29:59.802 /dev/nbd10 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:29:59.802 23:11:40 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.802 1+0 records in 00:29:59.802 1+0 records out 00:29:59.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458516 s, 8.9 MB/s 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:59.802 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:30:00.063 /dev/nbd11 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.063 1+0 records in 00:30:00.063 1+0 records out 00:30:00.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339671 s, 12.1 MB/s 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:00.063 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:30:00.324 /dev/nbd12 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.324 1+0 records in 00:30:00.324 1+0 records out 00:30:00.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436031 s, 9.4 MB/s 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:00.324 23:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:30:00.586 /dev/nbd13 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:00.586 23:11:41 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.586 1+0 records in 00:30:00.586 1+0 records out 00:30:00.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421057 s, 9.7 MB/s 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:00.586 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:30:00.847 /dev/nbd14 00:30:00.847 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:30:00.847 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:30:00.847 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:30:00.847 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.848 1+0 records in 00:30:00.848 1+0 records out 00:30:00.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404218 s, 10.1 MB/s 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd0", 00:30:00.848 "bdev_name": "Nvme0n1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd1", 00:30:00.848 "bdev_name": "Nvme1n1p1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd10", 00:30:00.848 "bdev_name": "Nvme1n1p2" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd11", 00:30:00.848 "bdev_name": "Nvme2n1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd12", 00:30:00.848 "bdev_name": "Nvme2n2" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd13", 00:30:00.848 "bdev_name": "Nvme2n3" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd14", 00:30:00.848 "bdev_name": "Nvme3n1" 00:30:00.848 } 00:30:00.848 ]' 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:00.848 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd0", 00:30:00.848 "bdev_name": "Nvme0n1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd1", 00:30:00.848 "bdev_name": "Nvme1n1p1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd10", 00:30:00.848 "bdev_name": "Nvme1n1p2" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd11", 00:30:00.848 "bdev_name": "Nvme2n1" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd12", 00:30:00.848 "bdev_name": "Nvme2n2" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd13", 00:30:00.848 "bdev_name": "Nvme2n3" 00:30:00.848 }, 00:30:00.848 { 00:30:00.848 "nbd_device": "/dev/nbd14", 00:30:00.848 "bdev_name": "Nvme3n1" 00:30:00.848 } 00:30:00.848 ]' 00:30:01.109 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:01.109 /dev/nbd1 00:30:01.109 /dev/nbd10 00:30:01.109 /dev/nbd11 00:30:01.110 /dev/nbd12 00:30:01.110 /dev/nbd13 00:30:01.110 /dev/nbd14' 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:01.110 /dev/nbd1 00:30:01.110 /dev/nbd10 00:30:01.110 /dev/nbd11 00:30:01.110 /dev/nbd12 00:30:01.110 /dev/nbd13 00:30:01.110 /dev/nbd14' 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:01.110 
23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:01.110 256+0 records in 00:30:01.110 256+0 records out 00:30:01.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722331 s, 145 MB/s 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:01.110 256+0 records in 00:30:01.110 256+0 records out 00:30:01.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104599 s, 10.0 MB/s 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:01.110 256+0 records in 00:30:01.110 256+0 records out 00:30:01.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0884101 s, 11.9 MB/s 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.110 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:01.406 256+0 records in 00:30:01.406 256+0 records out 00:30:01.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0873016 s, 12.0 MB/s 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:30:01.406 256+0 records in 00:30:01.406 256+0 records out 00:30:01.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0872173 s, 12.0 MB/s 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:30:01.406 256+0 records in 00:30:01.406 256+0 records out 00:30:01.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0809915 s, 12.9 MB/s 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.406 23:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:30:01.668 256+0 records in 00:30:01.668 256+0 records out 00:30:01.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0798098 s, 13.1 MB/s 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:30:01.668 256+0 records in 00:30:01.668 256+0 records out 00:30:01.668 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.0772729 s, 13.6 MB/s 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@51 -- # local i 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.668 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.929 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.192 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.193 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.193 23:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 
00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.458 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.721 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.983 23:11:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:02.983 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:03.244 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:30:03.245 23:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:03.505 malloc_lvol_verify 00:30:03.505 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:03.789 7f7d579f-bdd6-4850-8278-972c6a8f5b23 00:30:03.789 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:04.050 0d2a1605-0ca8-4423-b473-8d3d58553460 00:30:04.050 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:04.311 /dev/nbd0 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:30:04.311 mke2fs 1.47.0 (5-Feb-2023) 00:30:04.311 Discarding device blocks: 0/4096 done 00:30:04.311 Creating filesystem with 4096 1k blocks and 1024 inodes 00:30:04.311 00:30:04.311 Allocating group tables: 0/1 done 00:30:04.311 Writing inode tables: 0/1 done 00:30:04.311 Creating journal (1024 blocks): done 00:30:04.311 Writing superblocks and filesystem accounting information: 0/1 done 00:30:04.311 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:04.311 23:11:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61488 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61488 ']' 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61488 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61488 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.574 killing process with pid 61488 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61488' 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61488 00:30:04.574 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61488 00:30:05.513 23:11:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:30:05.513 00:30:05.513 real 0m10.705s 00:30:05.513 user 0m15.254s 00:30:05.513 sys 0m3.464s 00:30:05.513 23:11:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.513 23:11:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:30:05.513 ************************************ 00:30:05.513 END TEST bdev_nbd 00:30:05.513 ************************************ 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:30:05.513 skipping fio tests on NVMe due to multi-ns failures. 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:05.513 23:11:45 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:05.513 23:11:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:05.513 23:11:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.513 23:11:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:05.513 ************************************ 00:30:05.513 START TEST bdev_verify 00:30:05.513 ************************************ 00:30:05.513 23:11:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:05.513 [2024-12-09 23:11:45.975104] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:05.513 [2024-12-09 23:11:45.975228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:30:05.513 [2024-12-09 23:11:46.135929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.774 [2024-12-09 23:11:46.237263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.774 [2024-12-09 23:11:46.237514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.375 Running I/O for 5 seconds... 
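For reference, the verify pass that follows is driven by SPDK's bdevperf example application; the invocation recorded in the run_test line above is, in essence:

    # 5-second verify workload: queue depth 128, 4 KiB IOs, core mask 0x3 (cores 0-1)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-second IOPS samples and the per-job latency summary that follow are bdevperf's own output; each bdev appears twice in the table, once per reactor core (core masks 0x1 and 0x2).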
00:30:08.701 20416.00 IOPS, 79.75 MiB/s [2024-12-09T23:11:50.301Z] 22176.00 IOPS, 86.62 MiB/s [2024-12-09T23:11:51.245Z] 21589.33 IOPS, 84.33 MiB/s [2024-12-09T23:11:52.190Z] 21904.00 IOPS, 85.56 MiB/s [2024-12-09T23:11:52.190Z] 22208.00 IOPS, 86.75 MiB/s 00:30:11.554 Latency(us) 00:30:11.554 [2024-12-09T23:11:52.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.554 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0xbd0bd 00:30:11.554 Nvme0n1 : 5.05 1545.95 6.04 0.00 0.00 82476.44 16535.24 109697.18 00:30:11.554 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:30:11.554 Nvme0n1 : 5.08 1575.26 6.15 0.00 0.00 80511.97 7612.26 66947.54 00:30:11.554 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x4ff80 00:30:11.554 Nvme1n1p1 : 5.05 1545.46 6.04 0.00 0.00 82317.58 17644.31 108083.99 00:30:11.554 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:11.554 Nvme1n1p1 : 5.08 1574.42 6.15 0.00 0.00 80373.16 8620.50 66140.95 00:30:11.554 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x4ff7f 00:30:11.554 Nvme1n1p2 : 5.08 1550.29 6.06 0.00 0.00 81881.91 5646.18 99614.72 00:30:11.554 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:11.554 Nvme1n1p2 : 5.10 1581.37 6.18 0.00 0.00 79998.76 15123.69 69770.63 00:30:11.554 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x80000 00:30:11.554 Nvme2n1 : 5.08 1549.83 6.05 0.00 0.00 81735.43 6024.27 93161.94 00:30:11.554 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x80000 length 0x80000 00:30:11.554 Nvme2n1 : 5.10 1580.95 6.18 0.00 0.00 79857.76 12199.78 74206.92 00:30:11.554 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x80000 00:30:11.554 Nvme2n2 : 5.09 1558.96 6.09 0.00 0.00 81253.45 7914.73 98808.12 00:30:11.554 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x80000 length 0x80000 00:30:11.554 Nvme2n2 : 5.05 1570.76 6.14 0.00 0.00 81179.40 16031.11 85499.27 00:30:11.554 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x80000 00:30:11.554 Nvme2n3 : 5.09 1558.55 6.09 0.00 0.00 81110.23 7965.14 104051.00 00:30:11.554 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x80000 length 0x80000 00:30:11.554 Nvme2n3 : 5.06 1569.44 6.13 0.00 0.00 81054.12 16535.24 75820.11 00:30:11.554 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x0 length 0x20000 00:30:11.554 Nvme3n1 : 5.09 1558.14 6.09 0.00 0.00 80952.16 8166.79 110503.78 00:30:11.554 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.554 Verification LBA range: start 0x20000 length 0x20000 00:30:11.554 Nvme3n1 
: 5.06 1569.01 6.13 0.00 0.00 80940.09 16434.41 70173.93 00:30:11.554 [2024-12-09T23:11:52.190Z] =================================================================================================================== 00:30:11.554 [2024-12-09T23:11:52.190Z] Total : 21888.38 85.50 0.00 0.00 81110.06 5646.18 110503.78 00:30:12.936 00:30:12.936 real 0m7.649s 00:30:12.936 user 0m13.728s 00:30:12.936 sys 0m0.236s 00:30:12.936 23:11:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.936 ************************************ 00:30:12.936 END TEST bdev_verify 00:30:12.936 ************************************ 00:30:12.936 23:11:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:13.196 23:11:53 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:13.196 23:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:13.196 23:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.196 23:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:13.196 ************************************ 00:30:13.196 START TEST bdev_verify_big_io 00:30:13.196 ************************************ 00:30:13.196 23:11:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:13.196 [2024-12-09 23:11:53.657752] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:13.196 [2024-12-09 23:11:53.657871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62000 ] 00:30:13.196 [2024-12-09 23:11:53.818805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.458 [2024-12-09 23:11:53.930870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.458 [2024-12-09 23:11:53.930876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.029 Running I/O for 5 seconds... 
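The big-IO results that follow come from the same bdevperf harness; per the run_test line above, the only change is the IO size, 64 KiB instead of 4 KiB:

    # same verify workload at 64 KiB per IO
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

At this block size the devices sustain on the order of 100 IOPS per job, so with a queue depth of 128 the average latency works out to roughly a second (latency ~= depth / IOPS), which matches the microsecond figures in the table below.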
00:30:19.122 480.00 IOPS, 30.00 MiB/s [2024-12-09T23:12:01.146Z] 1501.50 IOPS, 93.84 MiB/s [2024-12-09T23:12:01.146Z] 2255.33 IOPS, 140.96 MiB/s 00:30:20.510 Latency(us) 00:30:20.510 [2024-12-09T23:12:01.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x0 length 0xbd0b 00:30:20.510 Nvme0n1 : 5.86 103.83 6.49 0.00 0.00 1163969.19 22887.19 1387346.71 00:30:20.510 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0xbd0b length 0xbd0b 00:30:20.510 Nvme0n1 : 5.84 92.37 5.77 0.00 0.00 1306360.97 31255.63 1393799.48 00:30:20.510 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x0 length 0x4ff8 00:30:20.510 Nvme1n1p1 : 5.86 108.91 6.81 0.00 0.00 1093098.13 109697.18 1193763.45 00:30:20.510 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:20.510 Nvme1n1p1 : 5.93 95.73 5.98 0.00 0.00 1239485.99 95581.74 1251838.42 00:30:20.510 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x0 length 0x4ff7 00:30:20.510 Nvme1n1p2 : 5.86 109.22 6.83 0.00 0.00 1052237.82 131475.30 993727.41 00:30:20.510 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:20.510 Nvme1n1p2 : 6.10 100.85 6.30 0.00 0.00 1141980.25 76626.71 1064707.94 00:30:20.510 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.510 Verification LBA range: start 0x0 length 0x8000 00:30:20.510 Nvme2n1 : 6.04 116.55 7.28 0.00 0.00 958000.84 62914.56 1019538.51 00:30:20.511 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x8000 length 0x8000 00:30:20.511 Nvme2n1 : 6.10 101.16 6.32 0.00 0.00 1099278.20 78239.90 1096971.82 00:30:20.511 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x0 length 0x8000 00:30:20.511 Nvme2n2 : 6.12 121.00 7.56 0.00 0.00 890910.52 53235.40 1045349.61 00:30:20.511 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x8000 length 0x8000 00:30:20.511 Nvme2n2 : 6.10 104.89 6.56 0.00 0.00 1032816.48 82676.18 1129235.69 00:30:20.511 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x0 length 0x8000 00:30:20.511 Nvme2n3 : 6.20 127.90 7.99 0.00 0.00 813695.46 48799.11 1284102.30 00:30:20.511 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x8000 length 0x8000 00:30:20.511 Nvme2n3 : 6.18 113.99 7.12 0.00 0.00 921852.78 34885.32 1148594.02 00:30:20.511 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x0 length 0x2000 00:30:20.511 Nvme3n1 : 6.23 139.80 8.74 0.00 0.00 723931.10 4285.05 2051982.57 00:30:20.511 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:20.511 Verification LBA range: start 0x2000 length 0x2000 00:30:20.511 Nvme3n1 : 6.23 125.64 7.85 0.00 0.00 810109.60 3150.77 1413157.81 00:30:20.511 
[2024-12-09T23:12:01.147Z] =================================================================================================================== 00:30:20.511 [2024-12-09T23:12:01.147Z] Total : 1561.83 97.61 0.00 0.00 995873.13 3150.77 2051982.57 00:30:21.459 00:30:21.459 real 0m8.490s 00:30:21.459 user 0m15.867s 00:30:21.459 sys 0m0.206s 00:30:21.459 23:12:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.459 ************************************ 00:30:21.459 END TEST bdev_verify_big_io 00:30:21.459 ************************************ 00:30:21.459 23:12:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.719 23:12:02 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:21.719 23:12:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:21.719 23:12:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.719 23:12:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:21.719 ************************************ 00:30:21.719 START TEST bdev_write_zeroes 00:30:21.719 ************************************ 00:30:21.719 23:12:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:21.719 [2024-12-09 23:12:02.197375] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:21.719 [2024-12-09 23:12:02.197498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62115 ] 00:30:21.979 [2024-12-09 23:12:02.354809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.979 [2024-12-09 23:12:02.436891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.549 Running I/O for 1 seconds... 
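The write_zeroes results that follow come from a shorter, single-core run; no core mask is passed this time, and the EAL parameter dump above shows -c 0x1:

    # 1-second write_zeroes workload: queue depth 128, 4 KiB IOs, single core
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1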
00:30:23.549 65408.00 IOPS, 255.50 MiB/s 00:30:23.549 Latency(us) 00:30:23.549 [2024-12-09T23:12:04.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.549 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme0n1 : 1.02 9333.05 36.46 0.00 0.00 13685.56 9931.22 23693.78 00:30:23.549 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme1n1p1 : 1.02 9321.57 36.41 0.00 0.00 13680.31 10939.47 23492.14 00:30:23.549 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme1n1p2 : 1.02 9309.70 36.37 0.00 0.00 13664.98 10435.35 22685.54 00:30:23.549 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme2n1 : 1.03 9298.60 36.32 0.00 0.00 13642.15 8973.39 21979.77 00:30:23.549 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme2n2 : 1.03 9288.03 36.28 0.00 0.00 13636.35 8822.15 21576.47 00:30:23.549 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme2n3 : 1.03 9277.61 36.24 0.00 0.00 13630.11 8267.62 22282.24 00:30:23.549 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:23.549 Nvme3n1 : 1.03 9267.21 36.20 0.00 0.00 13623.40 7914.73 23794.61 00:30:23.549 [2024-12-09T23:12:04.185Z] =================================================================================================================== 00:30:23.549 [2024-12-09T23:12:04.185Z] Total : 65095.77 254.28 0.00 0.00 13651.84 7914.73 23794.61 00:30:24.121 00:30:24.121 real 0m2.618s 00:30:24.121 user 0m2.319s 00:30:24.121 sys 0m0.186s 00:30:24.121 23:12:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.121 23:12:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:30:24.121 ************************************ 00:30:24.121 END TEST bdev_write_zeroes 00:30:24.121 ************************************ 00:30:24.381 23:12:04 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.381 23:12:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:24.381 23:12:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.381 23:12:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:24.381 ************************************ 00:30:24.381 START TEST bdev_json_nonenclosed 00:30:24.381 ************************************ 00:30:24.381 23:12:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.381 [2024-12-09 23:12:04.852342] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
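bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at test/bdev/nonenclosed.json and is expected to reject it. The fixture itself is not reproduced in this log, but the error emitted below ("not enclosed in {}") implies a top-level fragment missing its enclosing object braces, for example (an assumed illustration, not the actual file):

    "subsystems": []

The run accordingly ends with json_config rejecting the configuration and spdk_app_stop exiting non-zero, which is the expected outcome for this test.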
00:30:24.381 [2024-12-09 23:12:04.852464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62162 ] 00:30:24.381 [2024-12-09 23:12:05.012594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.648 [2024-12-09 23:12:05.112862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.648 [2024-12-09 23:12:05.112941] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:24.648 [2024-12-09 23:12:05.112958] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:24.648 [2024-12-09 23:12:05.112967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:24.909 00:30:24.910 real 0m0.503s 00:30:24.910 user 0m0.301s 00:30:24.910 sys 0m0.098s 00:30:24.910 23:12:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.910 23:12:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:24.910 ************************************ 00:30:24.910 END TEST bdev_json_nonenclosed 00:30:24.910 ************************************ 00:30:24.910 23:12:05 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.910 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:24.910 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.910 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:24.910 ************************************ 00:30:24.910 START TEST bdev_json_nonarray 00:30:24.910 ************************************ 00:30:24.910 23:12:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.910 [2024-12-09 23:12:05.389084] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:24.910 [2024-12-09 23:12:05.389208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62188 ] 00:30:25.170 [2024-12-09 23:12:05.550939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.170 [2024-12-09 23:12:05.652661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.170 [2024-12-09 23:12:05.652751] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:25.170 [2024-12-09 23:12:05.652770] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:25.170 [2024-12-09 23:12:05.652779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:25.431 00:30:25.431 real 0m0.506s 00:30:25.431 user 0m0.305s 00:30:25.431 sys 0m0.097s 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:25.431 ************************************ 00:30:25.431 END TEST bdev_json_nonarray 00:30:25.431 ************************************ 00:30:25.431 23:12:05 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:30:25.431 23:12:05 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:30:25.431 23:12:05 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:25.431 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:25.431 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.431 23:12:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:25.431 ************************************ 00:30:25.431 START TEST bdev_gpt_uuid 00:30:25.431 ************************************ 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62213 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62213 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62213 ']' 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.431 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:25.432 23:12:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:25.432 [2024-12-09 23:12:05.955040] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
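bdev_gpt_uuid, starting here, checks GPT partition lookup by GUID: a standalone spdk_tgt is started, the bdev config is loaded over RPC, and each of the two GPT partitions on Nvme1n1 is fetched by its unique partition GUID and cross-checked with jq. Condensed into a single pipeline (the trace below stores the JSON in a variable first), the core check is:

    # fetch the partition bdev by its unique partition GUID and echo the GUID back
    rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
    # expected output: 6f89f330-603b-4116-ac73-2ca8eae53030

rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to the spdk_tgt instance on /var/tmp/spdk.sock; the same check is repeated below for the second partition GUID, abf1734f-66e5-4c0f-aa29-4021d4d307df.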
00:30:25.432 [2024-12-09 23:12:05.955163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:30:25.693 [2024-12-09 23:12:06.109439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.693 [2024-12-09 23:12:06.209058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.274 23:12:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.274 23:12:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:30:26.274 23:12:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:26.274 23:12:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.274 23:12:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.536 Some configs were skipped because the RPC state that can call them passed over. 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:30:26.536 { 00:30:26.536 "name": "Nvme1n1p1", 00:30:26.536 "aliases": [ 00:30:26.536 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:26.536 ], 00:30:26.536 "product_name": "GPT Disk", 00:30:26.536 "block_size": 4096, 00:30:26.536 "num_blocks": 655104, 00:30:26.536 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:26.536 "assigned_rate_limits": { 00:30:26.536 "rw_ios_per_sec": 0, 00:30:26.536 "rw_mbytes_per_sec": 0, 00:30:26.536 "r_mbytes_per_sec": 0, 00:30:26.536 "w_mbytes_per_sec": 0 00:30:26.536 }, 00:30:26.536 "claimed": false, 00:30:26.536 "zoned": false, 00:30:26.536 "supported_io_types": { 00:30:26.536 "read": true, 00:30:26.536 "write": true, 00:30:26.536 "unmap": true, 00:30:26.536 "flush": true, 00:30:26.536 "reset": true, 00:30:26.536 "nvme_admin": false, 00:30:26.536 "nvme_io": false, 00:30:26.536 "nvme_io_md": false, 00:30:26.536 "write_zeroes": true, 00:30:26.536 "zcopy": false, 00:30:26.536 "get_zone_info": false, 00:30:26.536 "zone_management": false, 00:30:26.536 "zone_append": false, 00:30:26.536 "compare": true, 00:30:26.536 "compare_and_write": false, 00:30:26.536 "abort": true, 00:30:26.536 "seek_hole": false, 00:30:26.536 "seek_data": false, 00:30:26.536 "copy": true, 00:30:26.536 "nvme_iov_md": false 00:30:26.536 }, 00:30:26.536 "driver_specific": { 
00:30:26.536 "gpt": { 00:30:26.536 "base_bdev": "Nvme1n1", 00:30:26.536 "offset_blocks": 256, 00:30:26.536 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:26.536 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:26.536 "partition_name": "SPDK_TEST_first" 00:30:26.536 } 00:30:26.536 } 00:30:26.536 } 00:30:26.536 ]' 00:30:26.536 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.800 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:30:26.800 { 00:30:26.800 "name": "Nvme1n1p2", 00:30:26.800 "aliases": [ 00:30:26.800 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:26.800 ], 00:30:26.800 "product_name": "GPT Disk", 00:30:26.800 "block_size": 4096, 00:30:26.800 "num_blocks": 655103, 00:30:26.800 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:26.800 "assigned_rate_limits": { 00:30:26.800 "rw_ios_per_sec": 0, 00:30:26.800 "rw_mbytes_per_sec": 0, 00:30:26.800 "r_mbytes_per_sec": 0, 00:30:26.800 "w_mbytes_per_sec": 0 00:30:26.800 }, 00:30:26.800 "claimed": false, 00:30:26.800 "zoned": false, 00:30:26.800 "supported_io_types": { 00:30:26.800 "read": true, 00:30:26.800 "write": true, 00:30:26.800 "unmap": true, 00:30:26.800 "flush": true, 00:30:26.800 "reset": true, 00:30:26.800 "nvme_admin": false, 00:30:26.800 "nvme_io": false, 00:30:26.800 "nvme_io_md": false, 00:30:26.800 "write_zeroes": true, 00:30:26.800 "zcopy": false, 00:30:26.800 "get_zone_info": false, 00:30:26.800 "zone_management": false, 00:30:26.800 "zone_append": false, 00:30:26.800 "compare": true, 00:30:26.800 "compare_and_write": false, 00:30:26.800 "abort": true, 00:30:26.800 "seek_hole": false, 00:30:26.800 "seek_data": false, 00:30:26.800 "copy": true, 00:30:26.800 "nvme_iov_md": false 00:30:26.800 }, 00:30:26.800 "driver_specific": { 00:30:26.800 "gpt": { 00:30:26.800 "base_bdev": "Nvme1n1", 00:30:26.800 "offset_blocks": 655360, 00:30:26.800 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:26.800 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:26.800 "partition_name": "SPDK_TEST_second" 00:30:26.800 } 00:30:26.800 } 00:30:26.800 } 00:30:26.800 ]' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62213 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62213 ']' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62213 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62213 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.801 killing process with pid 62213 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62213' 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62213 00:30:26.801 23:12:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62213 00:30:28.715 00:30:28.715 real 0m2.989s 00:30:28.715 user 0m3.146s 00:30:28.715 sys 0m0.354s 00:30:28.715 23:12:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.715 23:12:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:28.715 ************************************ 00:30:28.715 END TEST bdev_gpt_uuid 00:30:28.715 ************************************ 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:30:28.715 23:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:28.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:28.715 Waiting for block devices as requested 00:30:28.715 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.976 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:30:28.976 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.976 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:34.294 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:34.294 23:12:14 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:30:34.294 23:12:14 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:30:34.294 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:34.294 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:34.294 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:34.294 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:34.294 23:12:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:30:34.294 00:30:34.294 real 0m56.979s 00:30:34.294 user 1m13.474s 00:30:34.294 sys 0m7.577s 00:30:34.294 23:12:14 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.294 23:12:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:34.294 ************************************ 00:30:34.294 END TEST blockdev_nvme_gpt 00:30:34.294 ************************************ 00:30:34.294 23:12:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:34.294 23:12:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:34.294 23:12:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.294 23:12:14 -- common/autotest_common.sh@10 -- # set +x 00:30:34.294 ************************************ 00:30:34.294 START TEST nvme 00:30:34.294 ************************************ 00:30:34.294 23:12:14 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:34.561 * Looking for test storage... 00:30:34.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:34.561 23:12:14 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:34.561 23:12:14 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:34.561 23:12:14 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:30:34.561 23:12:15 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:34.561 23:12:15 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.561 23:12:15 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.561 23:12:15 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.561 23:12:15 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.561 23:12:15 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.561 23:12:15 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.561 23:12:15 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.561 23:12:15 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.561 23:12:15 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.561 23:12:15 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.561 23:12:15 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.561 23:12:15 nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:34.561 23:12:15 nvme -- scripts/common.sh@345 -- # : 1 00:30:34.561 23:12:15 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.561 23:12:15 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:34.561 23:12:15 nvme -- scripts/common.sh@365 -- # decimal 1 00:30:34.561 23:12:15 nvme -- scripts/common.sh@353 -- # local d=1 00:30:34.561 23:12:15 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.562 23:12:15 nvme -- scripts/common.sh@355 -- # echo 1 00:30:34.562 23:12:15 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.562 23:12:15 nvme -- scripts/common.sh@366 -- # decimal 2 00:30:34.562 23:12:15 nvme -- scripts/common.sh@353 -- # local d=2 00:30:34.562 23:12:15 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.562 23:12:15 nvme -- scripts/common.sh@355 -- # echo 2 00:30:34.562 23:12:15 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.562 23:12:15 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.562 23:12:15 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.562 23:12:15 nvme -- scripts/common.sh@368 -- # return 0 00:30:34.562 23:12:15 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.562 23:12:15 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.562 --rc genhtml_branch_coverage=1 00:30:34.562 --rc genhtml_function_coverage=1 00:30:34.562 --rc genhtml_legend=1 00:30:34.562 --rc geninfo_all_blocks=1 00:30:34.562 --rc geninfo_unexecuted_blocks=1 00:30:34.562 00:30:34.562 ' 00:30:34.562 23:12:15 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.562 --rc genhtml_branch_coverage=1 00:30:34.562 --rc genhtml_function_coverage=1 00:30:34.562 --rc genhtml_legend=1 00:30:34.562 --rc geninfo_all_blocks=1 00:30:34.562 --rc geninfo_unexecuted_blocks=1 00:30:34.562 00:30:34.562 ' 00:30:34.562 23:12:15 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.562 --rc genhtml_branch_coverage=1 00:30:34.562 --rc genhtml_function_coverage=1 00:30:34.562 --rc genhtml_legend=1 00:30:34.562 --rc geninfo_all_blocks=1 00:30:34.562 --rc geninfo_unexecuted_blocks=1 00:30:34.562 00:30:34.562 ' 00:30:34.562 23:12:15 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.562 --rc genhtml_branch_coverage=1 00:30:34.562 --rc genhtml_function_coverage=1 00:30:34.562 --rc genhtml_legend=1 00:30:34.562 --rc geninfo_all_blocks=1 00:30:34.562 --rc geninfo_unexecuted_blocks=1 00:30:34.562 00:30:34.562 ' 00:30:34.562 23:12:15 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:34.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:35.391 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:35.391 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:35.391 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:35.391 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:35.649 23:12:16 nvme -- nvme/nvme.sh@79 -- # uname 00:30:35.649 23:12:16 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:35.649 23:12:16 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:35.649 23:12:16 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:35.649 23:12:16 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1075 -- # stubpid=62845 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:30:35.649 Waiting for stub to ready for secondary processes... 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62845 ]] 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:30:35.649 23:12:16 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:35.649 [2024-12-09 23:12:16.072624] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:35.649 [2024-12-09 23:12:16.072753] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:30:36.251 [2024-12-09 23:12:16.822262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:36.508 [2024-12-09 23:12:16.917470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.508 [2024-12-09 23:12:16.917819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.508 [2024-12-09 23:12:16.917844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.508 [2024-12-09 23:12:16.931046] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:30:36.508 [2024-12-09 23:12:16.931086] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:36.508 [2024-12-09 23:12:16.942005] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:36.508 [2024-12-09 23:12:16.942086] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:36.508 [2024-12-09 23:12:16.943521] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:36.508 [2024-12-09 23:12:16.943640] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:30:36.508 [2024-12-09 23:12:16.943678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:30:36.508 [2024-12-09 23:12:16.946137] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:36.508 [2024-12-09 23:12:16.946341] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:30:36.508 [2024-12-09 23:12:16.946433] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:30:36.508 [2024-12-09 23:12:16.949556] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:36.508 [2024-12-09 23:12:16.949779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:30:36.509 [2024-12-09 23:12:16.949874] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:30:36.509 [2024-12-09 23:12:16.949939] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:30:36.509 [2024-12-09 23:12:16.950033] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:30:36.509 23:12:17 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:36.509 done. 00:30:36.509 23:12:17 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:30:36.509 23:12:17 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:36.509 23:12:17 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:30:36.509 23:12:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.509 23:12:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:36.509 ************************************ 00:30:36.509 START TEST nvme_reset 00:30:36.509 ************************************ 00:30:36.509 23:12:17 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:36.767 Initializing NVMe Controllers 00:30:36.767 Skipping QEMU NVMe SSD at 0000:00:10.0 00:30:36.767 Skipping QEMU NVMe SSD at 0000:00:11.0 00:30:36.767 Skipping QEMU NVMe SSD at 0000:00:13.0 00:30:36.767 Skipping QEMU NVMe SSD at 0000:00:12.0 00:30:36.767 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:36.767 00:30:36.767 real 0m0.231s 00:30:36.767 user 0m0.081s 00:30:36.767 sys 0m0.101s 00:30:36.767 23:12:17 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.767 23:12:17 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:30:36.767 ************************************ 00:30:36.767 END TEST nvme_reset 00:30:36.767 ************************************ 00:30:36.767 23:12:17 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:36.767 23:12:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:36.767 23:12:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.767 23:12:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:36.767 ************************************ 00:30:36.767 START TEST nvme_identify 00:30:36.767 ************************************ 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:30:36.767 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:30:36.767 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:36.767 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:36.767 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:30:36.767 23:12:17 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:36.767 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:37.027 [2024-12-09 
23:12:17.551687] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62866 terminated unexpected 00:30:37.027 ===================================================== 00:30:37.027 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:37.027 ===================================================== 00:30:37.027 Controller Capabilities/Features 00:30:37.027 ================================ 00:30:37.027 Vendor ID: 1b36 00:30:37.027 Subsystem Vendor ID: 1af4 00:30:37.027 Serial Number: 12340 00:30:37.027 Model Number: QEMU NVMe Ctrl 00:30:37.027 Firmware Version: 8.0.0 00:30:37.027 Recommended Arb Burst: 6 00:30:37.027 IEEE OUI Identifier: 00 54 52 00:30:37.027 Multi-path I/O 00:30:37.027 May have multiple subsystem ports: No 00:30:37.027 May have multiple controllers: No 00:30:37.027 Associated with SR-IOV VF: No 00:30:37.027 Max Data Transfer Size: 524288 00:30:37.027 Max Number of Namespaces: 256 00:30:37.027 Max Number of I/O Queues: 64 00:30:37.027 NVMe Specification Version (VS): 1.4 00:30:37.027 NVMe Specification Version (Identify): 1.4 00:30:37.027 Maximum Queue Entries: 2048 00:30:37.027 Contiguous Queues Required: Yes 00:30:37.027 Arbitration Mechanisms Supported 00:30:37.027 Weighted Round Robin: Not Supported 00:30:37.027 Vendor Specific: Not Supported 00:30:37.027 Reset Timeout: 7500 ms 00:30:37.027 Doorbell Stride: 4 bytes 00:30:37.027 NVM Subsystem Reset: Not Supported 00:30:37.027 Command Sets Supported 00:30:37.027 NVM Command Set: Supported 00:30:37.027 Boot Partition: Not Supported 00:30:37.027 Memory Page Size Minimum: 4096 bytes 00:30:37.027 Memory Page Size Maximum: 65536 bytes 00:30:37.027 Persistent Memory Region: Not Supported 00:30:37.027 Optional Asynchronous Events Supported 00:30:37.027 Namespace Attribute Notices: Supported 00:30:37.027 Firmware Activation Notices: Not Supported 00:30:37.027 ANA Change Notices: Not Supported 00:30:37.027 PLE Aggregate Log Change Notices: Not Supported 00:30:37.027 LBA Status Info Alert Notices: Not Supported 00:30:37.027 EGE Aggregate Log Change Notices: Not Supported 00:30:37.027 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.027 Zone Descriptor Change Notices: Not Supported 00:30:37.027 Discovery Log Change Notices: Not Supported 00:30:37.027 Controller Attributes 00:30:37.027 128-bit Host Identifier: Not Supported 00:30:37.027 Non-Operational Permissive Mode: Not Supported 00:30:37.027 NVM Sets: Not Supported 00:30:37.027 Read Recovery Levels: Not Supported 00:30:37.027 Endurance Groups: Not Supported 00:30:37.027 Predictable Latency Mode: Not Supported 00:30:37.027 Traffic Based Keep ALive: Not Supported 00:30:37.027 Namespace Granularity: Not Supported 00:30:37.027 SQ Associations: Not Supported 00:30:37.027 UUID List: Not Supported 00:30:37.027 Multi-Domain Subsystem: Not Supported 00:30:37.027 Fixed Capacity Management: Not Supported 00:30:37.027 Variable Capacity Management: Not Supported 00:30:37.027 Delete Endurance Group: Not Supported 00:30:37.027 Delete NVM Set: Not Supported 00:30:37.027 Extended LBA Formats Supported: Supported 00:30:37.027 Flexible Data Placement Supported: Not Supported 00:30:37.027 00:30:37.027 Controller Memory Buffer Support 00:30:37.028 ================================ 00:30:37.028 Supported: No 00:30:37.028 00:30:37.028 Persistent Memory Region Support 00:30:37.028 ================================ 00:30:37.028 Supported: No 00:30:37.028 00:30:37.028 Admin Command Set Attributes 00:30:37.028 ============================ 00:30:37.028 Security Send/Receive: 
Not Supported 00:30:37.028 Format NVM: Supported 00:30:37.028 Firmware Activate/Download: Not Supported 00:30:37.028 Namespace Management: Supported 00:30:37.028 Device Self-Test: Not Supported 00:30:37.028 Directives: Supported 00:30:37.028 NVMe-MI: Not Supported 00:30:37.028 Virtualization Management: Not Supported 00:30:37.028 Doorbell Buffer Config: Supported 00:30:37.028 Get LBA Status Capability: Not Supported 00:30:37.028 Command & Feature Lockdown Capability: Not Supported 00:30:37.028 Abort Command Limit: 4 00:30:37.028 Async Event Request Limit: 4 00:30:37.028 Number of Firmware Slots: N/A 00:30:37.028 Firmware Slot 1 Read-Only: N/A 00:30:37.028 Firmware Activation Without Reset: N/A 00:30:37.028 Multiple Update Detection Support: N/A 00:30:37.028 Firmware Update Granularity: No Information Provided 00:30:37.028 Per-Namespace SMART Log: Yes 00:30:37.028 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.028 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:37.028 Command Effects Log Page: Supported 00:30:37.028 Get Log Page Extended Data: Supported 00:30:37.028 Telemetry Log Pages: Not Supported 00:30:37.028 Persistent Event Log Pages: Not Supported 00:30:37.028 Supported Log Pages Log Page: May Support 00:30:37.028 Commands Supported & Effects Log Page: Not Supported 00:30:37.028 Feature Identifiers & Effects Log Page:May Support 00:30:37.028 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.028 Data Area 4 for Telemetry Log: Not Supported 00:30:37.028 Error Log Page Entries Supported: 1 00:30:37.028 Keep Alive: Not Supported 00:30:37.028 00:30:37.028 NVM Command Set Attributes 00:30:37.028 ========================== 00:30:37.028 Submission Queue Entry Size 00:30:37.028 Max: 64 00:30:37.028 Min: 64 00:30:37.028 Completion Queue Entry Size 00:30:37.028 Max: 16 00:30:37.028 Min: 16 00:30:37.028 Number of Namespaces: 256 00:30:37.028 Compare Command: Supported 00:30:37.028 Write Uncorrectable Command: Not Supported 00:30:37.028 Dataset Management Command: Supported 00:30:37.028 Write Zeroes Command: Supported 00:30:37.028 Set Features Save Field: Supported 00:30:37.028 Reservations: Not Supported 00:30:37.028 Timestamp: Supported 00:30:37.028 Copy: Supported 00:30:37.028 Volatile Write Cache: Present 00:30:37.028 Atomic Write Unit (Normal): 1 00:30:37.028 Atomic Write Unit (PFail): 1 00:30:37.028 Atomic Compare & Write Unit: 1 00:30:37.028 Fused Compare & Write: Not Supported 00:30:37.028 Scatter-Gather List 00:30:37.028 SGL Command Set: Supported 00:30:37.028 SGL Keyed: Not Supported 00:30:37.028 SGL Bit Bucket Descriptor: Not Supported 00:30:37.028 SGL Metadata Pointer: Not Supported 00:30:37.028 Oversized SGL: Not Supported 00:30:37.028 SGL Metadata Address: Not Supported 00:30:37.028 SGL Offset: Not Supported 00:30:37.028 Transport SGL Data Block: Not Supported 00:30:37.028 Replay Protected Memory Block: Not Supported 00:30:37.028 00:30:37.028 Firmware Slot Information 00:30:37.028 ========================= 00:30:37.028 Active slot: 1 00:30:37.028 Slot 1 Firmware Revision: 1.0 00:30:37.028 00:30:37.028 00:30:37.028 Commands Supported and Effects 00:30:37.028 ============================== 00:30:37.028 Admin Commands 00:30:37.028 -------------- 00:30:37.028 Delete I/O Submission Queue (00h): Supported 00:30:37.028 Create I/O Submission Queue (01h): Supported 00:30:37.028 Get Log Page (02h): Supported 00:30:37.028 Delete I/O Completion Queue (04h): Supported 00:30:37.028 Create I/O Completion Queue (05h): Supported 00:30:37.028 Identify (06h): Supported 
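The identify report above is plain line-oriented text, so a saved copy can be tabulated directly. A minimal sketch, assuming the tool's stdout was first captured to a file (identify.txt is a hypothetical name, not something this run produces):

    # Tally attributes reported as Supported vs Not Supported, then list the
    # unsupported ones. identify.txt is an assumed capture of the report above.
    awk -F': ' '/: (Not )?Supported$/ { n[$NF]++ } END { for (k in n) print n[k], k }' identify.txt
    grep ': Not Supported$' identify.txt | sort -u

The same pattern works for any of the four controllers in this run, since each prints the identical field layout.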
00:30:37.028 Abort (08h): Supported 00:30:37.028 Set Features (09h): Supported 00:30:37.028 Get Features (0Ah): Supported 00:30:37.028 Asynchronous Event Request (0Ch): Supported 00:30:37.028 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.028 Directive Send (19h): Supported 00:30:37.028 Directive Receive (1Ah): Supported 00:30:37.028 Virtualization Management (1Ch): Supported 00:30:37.028 Doorbell Buffer Config (7Ch): Supported 00:30:37.028 Format NVM (80h): Supported LBA-Change 00:30:37.028 I/O Commands 00:30:37.028 ------------ 00:30:37.028 Flush (00h): Supported LBA-Change 00:30:37.028 Write (01h): Supported LBA-Change 00:30:37.028 Read (02h): Supported 00:30:37.028 Compare (05h): Supported 00:30:37.028 Write Zeroes (08h): Supported LBA-Change 00:30:37.028 Dataset Management (09h): Supported LBA-Change 00:30:37.028 Unknown (0Ch): Supported 00:30:37.028 Unknown (12h): Supported 00:30:37.028 Copy (19h): Supported LBA-Change 00:30:37.028 Unknown (1Dh): Supported LBA-Change 00:30:37.028 00:30:37.028 Error Log 00:30:37.028 ========= 00:30:37.028 00:30:37.028 Arbitration 00:30:37.028 =========== 00:30:37.028 Arbitration Burst: no limit 00:30:37.028 00:30:37.028 Power Management 00:30:37.028 ================ 00:30:37.028 Number of Power States: 1 00:30:37.028 Current Power State: Power State #0 00:30:37.028 Power State #0: 00:30:37.028 Max Power: 25.00 W 00:30:37.028 Non-Operational State: Operational 00:30:37.028 Entry Latency: 16 microseconds 00:30:37.028 Exit Latency: 4 microseconds 00:30:37.028 Relative Read Throughput: 0 00:30:37.028 Relative Read Latency: 0 00:30:37.028 Relative Write Throughput: 0 00:30:37.028 Relative Write Latency: 0 00:30:37.028 [2024-12-09 23:12:17.552957] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62866 terminated unexpected 00:30:37.028 Idle Power: Not Reported 00:30:37.028 Active Power: Not Reported 00:30:37.028 Non-Operational Permissive Mode: Not Supported 00:30:37.028 00:30:37.028 Health Information 00:30:37.028 ================== 00:30:37.028 Critical Warnings: 00:30:37.028 Available Spare Space: OK 00:30:37.028 Temperature: OK 00:30:37.028 Device Reliability: OK 00:30:37.028 Read Only: No 00:30:37.028 Volatile Memory Backup: OK 00:30:37.028 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.028 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.028 Available Spare: 0% 00:30:37.028 Available Spare Threshold: 0% 00:30:37.028 Life Percentage Used: 0% 00:30:37.028 Data Units Read: 669 00:30:37.028 Data Units Written: 597 00:30:37.028 Host Read Commands: 36623 00:30:37.028 Host Write Commands: 36409 00:30:37.028 Controller Busy Time: 0 minutes 00:30:37.028 Power Cycles: 0 00:30:37.028 Power On Hours: 0 hours 00:30:37.028 Unsafe Shutdowns: 0 00:30:37.028 Unrecoverable Media Errors: 0 00:30:37.028 Lifetime Error Log Entries: 0 00:30:37.028 Warning Temperature Time: 0 minutes 00:30:37.028 Critical Temperature Time: 0 minutes 00:30:37.028 00:30:37.028 Number of Queues 00:30:37.028 ================ 00:30:37.028 Number of I/O Submission Queues: 64 00:30:37.028 Number of I/O Completion Queues: 64 00:30:37.028 00:30:37.028 ZNS Specific Controller Data 00:30:37.028 ============================ 00:30:37.028 Zone Append Size Limit: 0 00:30:37.028 00:30:37.028 00:30:37.028 Active Namespaces 00:30:37.028 ================= 00:30:37.028 Namespace ID:1 00:30:37.028 Error Recovery Timeout: Unlimited 00:30:37.028 Command Set Identifier: NVM (00h) 00:30:37.028 Deallocate: Supported 
Deallocated/Unwritten Error: Supported 00:30:37.028 Deallocated Read Value: All 0x00 00:30:37.028 Deallocate in Write Zeroes: Not Supported 00:30:37.028 Deallocated Guard Field: 0xFFFF 00:30:37.028 Flush: Supported 00:30:37.028 Reservation: Not Supported 00:30:37.028 Metadata Transferred as: Separate Metadata Buffer 00:30:37.028 Namespace Sharing Capabilities: Private 00:30:37.028 Size (in LBAs): 1548666 (5GiB) 00:30:37.028 Capacity (in LBAs): 1548666 (5GiB) 00:30:37.028 Utilization (in LBAs): 1548666 (5GiB) 00:30:37.028 Thin Provisioning: Not Supported 00:30:37.028 Per-NS Atomic Units: No 00:30:37.028 Maximum Single Source Range Length: 128 00:30:37.028 Maximum Copy Length: 128 00:30:37.028 Maximum Source Range Count: 128 00:30:37.029 NGUID/EUI64 Never Reused: No 00:30:37.029 Namespace Write Protected: No 00:30:37.029 Number of LBA Formats: 8 00:30:37.029 Current LBA Format: LBA Format #07 00:30:37.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.029 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.029 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.029 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.029 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.029 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.029 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.029 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.029 00:30:37.029 NVM Specific Namespace Data 00:30:37.029 =========================== 00:30:37.029 Logical Block Storage Tag Mask: 0 00:30:37.029 Protection Information Capabilities: 00:30:37.029 16b Guard Protection Information Storage Tag Support: No 00:30:37.029 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.029 Storage Tag Check Read Support: No 00:30:37.029 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.029 ===================================================== 00:30:37.029 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:37.029 ===================================================== 00:30:37.029 Controller Capabilities/Features 00:30:37.029 ================================ 00:30:37.029 Vendor ID: 1b36 00:30:37.029 Subsystem Vendor ID: 1af4 00:30:37.029 Serial Number: 12341 00:30:37.029 Model Number: QEMU NVMe Ctrl 00:30:37.029 Firmware Version: 8.0.0 00:30:37.029 Recommended Arb Burst: 6 00:30:37.029 IEEE OUI Identifier: 00 54 52 00:30:37.029 Multi-path I/O 00:30:37.029 May have multiple subsystem ports: No 00:30:37.029 May have multiple controllers: No 00:30:37.029 Associated with SR-IOV VF: No 00:30:37.029 Max Data Transfer Size: 524288 00:30:37.029 Max Number of Namespaces: 256 00:30:37.029 Max Number of I/O Queues: 64 00:30:37.029 NVMe Specification Version (VS): 1.4 00:30:37.029 NVMe 
Specification Version (Identify): 1.4 00:30:37.029 Maximum Queue Entries: 2048 00:30:37.029 Contiguous Queues Required: Yes 00:30:37.029 Arbitration Mechanisms Supported 00:30:37.029 Weighted Round Robin: Not Supported 00:30:37.029 Vendor Specific: Not Supported 00:30:37.029 Reset Timeout: 7500 ms 00:30:37.029 Doorbell Stride: 4 bytes 00:30:37.029 NVM Subsystem Reset: Not Supported 00:30:37.029 Command Sets Supported 00:30:37.029 NVM Command Set: Supported 00:30:37.029 Boot Partition: Not Supported 00:30:37.029 Memory Page Size Minimum: 4096 bytes 00:30:37.029 Memory Page Size Maximum: 65536 bytes 00:30:37.029 Persistent Memory Region: Not Supported 00:30:37.029 Optional Asynchronous Events Supported 00:30:37.029 Namespace Attribute Notices: Supported 00:30:37.029 Firmware Activation Notices: Not Supported 00:30:37.029 ANA Change Notices: Not Supported 00:30:37.029 PLE Aggregate Log Change Notices: Not Supported 00:30:37.029 LBA Status Info Alert Notices: Not Supported 00:30:37.029 EGE Aggregate Log Change Notices: Not Supported 00:30:37.029 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.029 Zone Descriptor Change Notices: Not Supported 00:30:37.029 Discovery Log Change Notices: Not Supported 00:30:37.029 Controller Attributes 00:30:37.029 128-bit Host Identifier: Not Supported 00:30:37.029 Non-Operational Permissive Mode: Not Supported 00:30:37.029 NVM Sets: Not Supported 00:30:37.029 Read Recovery Levels: Not Supported 00:30:37.029 Endurance Groups: Not Supported 00:30:37.029 Predictable Latency Mode: Not Supported 00:30:37.029 Traffic Based Keep ALive: Not Supported 00:30:37.029 Namespace Granularity: Not Supported 00:30:37.029 SQ Associations: Not Supported 00:30:37.029 UUID List: Not Supported 00:30:37.029 Multi-Domain Subsystem: Not Supported 00:30:37.029 Fixed Capacity Management: Not Supported 00:30:37.029 Variable Capacity Management: Not Supported 00:30:37.029 Delete Endurance Group: Not Supported 00:30:37.029 Delete NVM Set: Not Supported 00:30:37.029 Extended LBA Formats Supported: Supported 00:30:37.029 Flexible Data Placement Supported: Not Supported 00:30:37.029 00:30:37.029 Controller Memory Buffer Support 00:30:37.029 ================================ 00:30:37.029 Supported: No 00:30:37.029 00:30:37.029 Persistent Memory Region Support 00:30:37.029 ================================ 00:30:37.029 Supported: No 00:30:37.029 00:30:37.029 Admin Command Set Attributes 00:30:37.029 ============================ 00:30:37.029 Security Send/Receive: Not Supported 00:30:37.029 Format NVM: Supported 00:30:37.029 Firmware Activate/Download: Not Supported 00:30:37.029 Namespace Management: Supported 00:30:37.029 Device Self-Test: Not Supported 00:30:37.029 Directives: Supported 00:30:37.029 NVMe-MI: Not Supported 00:30:37.029 Virtualization Management: Not Supported 00:30:37.029 Doorbell Buffer Config: Supported 00:30:37.029 Get LBA Status Capability: Not Supported 00:30:37.029 Command & Feature Lockdown Capability: Not Supported 00:30:37.029 Abort Command Limit: 4 00:30:37.029 Async Event Request Limit: 4 00:30:37.029 Number of Firmware Slots: N/A 00:30:37.029 Firmware Slot 1 Read-Only: N/A 00:30:37.029 Firmware Activation Without Reset: N/A 00:30:37.029 Multiple Update Detection Support: N/A 00:30:37.029 Firmware Update Granularity: No Information Provided 00:30:37.029 Per-Namespace SMART Log: Yes 00:30:37.029 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.029 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:37.029 Command Effects Log Page: Supported 
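Because spdk_nvme_identify walks every attached controller in one invocation, the report interleaves four controllers back to back. A sketch of splitting a saved copy into per-controller files, keyed on the banner lines, assuming a capture named report.txt (hypothetical):

    # Start a new output file at each "NVMe Controller at <bdf> [vid:did]" banner;
    # subsequent lines are appended to the current controller's file.
    awk '/^NVMe Controller at / { bdf = $4; gsub(/[:.]/, "-", bdf); out = "ctrl-" bdf ".txt" }
         out != "" { print > out }' report.txt

For this run that would yield ctrl-0000-00-10-0.txt through ctrl-0000-00-13-0.txt, one file per QEMU NVMe device.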
00:30:37.029 Get Log Page Extended Data: Supported 00:30:37.029 Telemetry Log Pages: Not Supported 00:30:37.029 Persistent Event Log Pages: Not Supported 00:30:37.029 Supported Log Pages Log Page: May Support 00:30:37.029 Commands Supported & Effects Log Page: Not Supported 00:30:37.029 Feature Identifiers & Effects Log Page:May Support 00:30:37.029 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.029 Data Area 4 for Telemetry Log: Not Supported 00:30:37.029 Error Log Page Entries Supported: 1 00:30:37.029 Keep Alive: Not Supported 00:30:37.029 00:30:37.029 NVM Command Set Attributes 00:30:37.029 ========================== 00:30:37.029 Submission Queue Entry Size 00:30:37.029 Max: 64 00:30:37.029 Min: 64 00:30:37.029 Completion Queue Entry Size 00:30:37.029 Max: 16 00:30:37.029 Min: 16 00:30:37.029 Number of Namespaces: 256 00:30:37.029 Compare Command: Supported 00:30:37.029 Write Uncorrectable Command: Not Supported 00:30:37.029 Dataset Management Command: Supported 00:30:37.029 Write Zeroes Command: Supported 00:30:37.029 Set Features Save Field: Supported 00:30:37.029 Reservations: Not Supported 00:30:37.029 Timestamp: Supported 00:30:37.029 Copy: Supported 00:30:37.029 Volatile Write Cache: Present 00:30:37.029 Atomic Write Unit (Normal): 1 00:30:37.029 Atomic Write Unit (PFail): 1 00:30:37.029 Atomic Compare & Write Unit: 1 00:30:37.029 Fused Compare & Write: Not Supported 00:30:37.029 Scatter-Gather List 00:30:37.029 SGL Command Set: Supported 00:30:37.029 SGL Keyed: Not Supported 00:30:37.029 SGL Bit Bucket Descriptor: Not Supported 00:30:37.029 SGL Metadata Pointer: Not Supported 00:30:37.029 Oversized SGL: Not Supported 00:30:37.029 SGL Metadata Address: Not Supported 00:30:37.029 SGL Offset: Not Supported 00:30:37.029 Transport SGL Data Block: Not Supported 00:30:37.029 Replay Protected Memory Block: Not Supported 00:30:37.029 00:30:37.029 Firmware Slot Information 00:30:37.029 ========================= 00:30:37.029 Active slot: 1 00:30:37.029 Slot 1 Firmware Revision: 1.0 00:30:37.029 00:30:37.029 00:30:37.029 Commands Supported and Effects 00:30:37.029 ============================== 00:30:37.029 Admin Commands 00:30:37.029 -------------- 00:30:37.029 Delete I/O Submission Queue (00h): Supported 00:30:37.029 Create I/O Submission Queue (01h): Supported 00:30:37.029 Get Log Page (02h): Supported 00:30:37.029 Delete I/O Completion Queue (04h): Supported 00:30:37.029 Create I/O Completion Queue (05h): Supported 00:30:37.029 Identify (06h): Supported 00:30:37.029 Abort (08h): Supported 00:30:37.029 Set Features (09h): Supported 00:30:37.030 Get Features (0Ah): Supported 00:30:37.030 Asynchronous Event Request (0Ch): Supported 00:30:37.030 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.030 Directive Send (19h): Supported 00:30:37.030 Directive Receive (1Ah): Supported 00:30:37.030 Virtualization Management (1Ch): Supported 00:30:37.030 Doorbell Buffer Config (7Ch): Supported 00:30:37.030 Format NVM (80h): Supported LBA-Change 00:30:37.030 I/O Commands 00:30:37.030 ------------ 00:30:37.030 Flush (00h): Supported LBA-Change 00:30:37.030 Write (01h): Supported LBA-Change 00:30:37.030 Read (02h): Supported 00:30:37.030 Compare (05h): Supported 00:30:37.030 Write Zeroes (08h): Supported LBA-Change 00:30:37.030 Dataset Management (09h): Supported LBA-Change 00:30:37.030 Unknown (0Ch): Supported 00:30:37.030 Unknown (12h): Supported 00:30:37.030 Copy (19h): Supported LBA-Change 00:30:37.030 Unknown (1Dh): Supported LBA-Change 00:30:37.030 00:30:37.030 Error 
Log 00:30:37.030 ========= 00:30:37.030 00:30:37.030 Arbitration 00:30:37.030 =========== 00:30:37.030 Arbitration Burst: no limit 00:30:37.030 00:30:37.030 Power Management 00:30:37.030 ================ 00:30:37.030 Number of Power States: 1 00:30:37.030 Current Power State: Power State #0 00:30:37.030 Power State #0: 00:30:37.030 Max Power: 25.00 W 00:30:37.030 Non-Operational State: Operational 00:30:37.030 Entry Latency: 16 microseconds 00:30:37.030 Exit Latency: 4 microseconds 00:30:37.030 Relative Read Throughput: 0 00:30:37.030 Relative Read Latency: 0 00:30:37.030 Relative Write Throughput: 0 00:30:37.030 Relative Write Latency: 0 00:30:37.030 Idle Power: Not Reported 00:30:37.030 Active Power: Not Reported 00:30:37.030 Non-Operational Permissive Mode: Not Supported 00:30:37.030 00:30:37.030 Health Information 00:30:37.030 ================== 00:30:37.030 Critical Warnings: 00:30:37.030 Available Spare Space: OK 00:30:37.030 [2024-12-09 23:12:17.553674] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62866 terminated unexpected 00:30:37.030 Temperature: OK 00:30:37.030 Device Reliability: OK 00:30:37.030 Read Only: No 00:30:37.030 Volatile Memory Backup: OK 00:30:37.030 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.030 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.030 Available Spare: 0% 00:30:37.030 Available Spare Threshold: 0% 00:30:37.030 Life Percentage Used: 0% 00:30:37.030 Data Units Read: 1040 00:30:37.030 Data Units Written: 907 00:30:37.030 Host Read Commands: 55114 00:30:37.030 Host Write Commands: 53895 00:30:37.030 Controller Busy Time: 0 minutes 00:30:37.030 Power Cycles: 0 00:30:37.030 Power On Hours: 0 hours 00:30:37.030 Unsafe Shutdowns: 0 00:30:37.030 Unrecoverable Media Errors: 0 00:30:37.030 Lifetime Error Log Entries: 0 00:30:37.030 Warning Temperature Time: 0 minutes 00:30:37.030 Critical Temperature Time: 0 minutes 00:30:37.030 00:30:37.030 Number of Queues 00:30:37.030 ================ 00:30:37.030 Number of I/O Submission Queues: 64 00:30:37.030 Number of I/O Completion Queues: 64 00:30:37.030 00:30:37.030 ZNS Specific Controller Data 00:30:37.030 ============================ 00:30:37.030 Zone Append Size Limit: 0 00:30:37.030 00:30:37.030 00:30:37.030 Active Namespaces 00:30:37.030 ================= 00:30:37.030 Namespace ID:1 00:30:37.030 Error Recovery Timeout: Unlimited 00:30:37.030 Command Set Identifier: NVM (00h) 00:30:37.030 Deallocate: Supported 00:30:37.030 Deallocated/Unwritten Error: Supported 00:30:37.030 Deallocated Read Value: All 0x00 00:30:37.030 Deallocate in Write Zeroes: Not Supported 00:30:37.030 Deallocated Guard Field: 0xFFFF 00:30:37.030 Flush: Supported 00:30:37.030 Reservation: Not Supported 00:30:37.030 Namespace Sharing Capabilities: Private 00:30:37.030 Size (in LBAs): 1310720 (5GiB) 00:30:37.030 Capacity (in LBAs): 1310720 (5GiB) 00:30:37.030 Utilization (in LBAs): 1310720 (5GiB) 00:30:37.030 Thin Provisioning: Not Supported 00:30:37.030 Per-NS Atomic Units: No 00:30:37.030 Maximum Single Source Range Length: 128 00:30:37.030 Maximum Copy Length: 128 00:30:37.030 Maximum Source Range Count: 128 00:30:37.030 NGUID/EUI64 Never Reused: No 00:30:37.030 Namespace Write Protected: No 00:30:37.030 Number of LBA Formats: 8 00:30:37.030 Current LBA Format: LBA Format #04 00:30:37.030 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.030 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.030 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.030 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:30:37.030 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.030 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.030 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.030 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.030 00:30:37.030 NVM Specific Namespace Data 00:30:37.030 =========================== 00:30:37.030 Logical Block Storage Tag Mask: 0 00:30:37.030 Protection Information Capabilities: 00:30:37.030 16b Guard Protection Information Storage Tag Support: No 00:30:37.030 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.030 Storage Tag Check Read Support: No 00:30:37.030 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.030 ===================================================== 00:30:37.030 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:37.030 ===================================================== 00:30:37.030 Controller Capabilities/Features 00:30:37.030 ================================ 00:30:37.030 Vendor ID: 1b36 00:30:37.030 Subsystem Vendor ID: 1af4 00:30:37.030 Serial Number: 12343 00:30:37.030 Model Number: QEMU NVMe Ctrl 00:30:37.030 Firmware Version: 8.0.0 00:30:37.030 Recommended Arb Burst: 6 00:30:37.030 IEEE OUI Identifier: 00 54 52 00:30:37.030 Multi-path I/O 00:30:37.030 May have multiple subsystem ports: No 00:30:37.030 May have multiple controllers: Yes 00:30:37.030 Associated with SR-IOV VF: No 00:30:37.030 Max Data Transfer Size: 524288 00:30:37.030 Max Number of Namespaces: 256 00:30:37.030 Max Number of I/O Queues: 64 00:30:37.030 NVMe Specification Version (VS): 1.4 00:30:37.030 NVMe Specification Version (Identify): 1.4 00:30:37.030 Maximum Queue Entries: 2048 00:30:37.030 Contiguous Queues Required: Yes 00:30:37.030 Arbitration Mechanisms Supported 00:30:37.030 Weighted Round Robin: Not Supported 00:30:37.030 Vendor Specific: Not Supported 00:30:37.030 Reset Timeout: 7500 ms 00:30:37.030 Doorbell Stride: 4 bytes 00:30:37.030 NVM Subsystem Reset: Not Supported 00:30:37.030 Command Sets Supported 00:30:37.030 NVM Command Set: Supported 00:30:37.030 Boot Partition: Not Supported 00:30:37.030 Memory Page Size Minimum: 4096 bytes 00:30:37.030 Memory Page Size Maximum: 65536 bytes 00:30:37.030 Persistent Memory Region: Not Supported 00:30:37.031 Optional Asynchronous Events Supported 00:30:37.031 Namespace Attribute Notices: Supported 00:30:37.031 Firmware Activation Notices: Not Supported 00:30:37.031 ANA Change Notices: Not Supported 00:30:37.031 PLE Aggregate Log Change Notices: Not Supported 00:30:37.031 LBA Status Info Alert Notices: Not Supported 00:30:37.031 EGE Aggregate Log Change Notices: Not Supported 00:30:37.031 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.031 Zone 
Descriptor Change Notices: Not Supported 00:30:37.031 Discovery Log Change Notices: Not Supported 00:30:37.031 Controller Attributes 00:30:37.031 128-bit Host Identifier: Not Supported 00:30:37.031 Non-Operational Permissive Mode: Not Supported 00:30:37.031 NVM Sets: Not Supported 00:30:37.031 Read Recovery Levels: Not Supported 00:30:37.031 Endurance Groups: Supported 00:30:37.031 Predictable Latency Mode: Not Supported 00:30:37.031 Traffic Based Keep ALive: Not Supported 00:30:37.031 Namespace Granularity: Not Supported 00:30:37.031 SQ Associations: Not Supported 00:30:37.031 UUID List: Not Supported 00:30:37.031 Multi-Domain Subsystem: Not Supported 00:30:37.031 Fixed Capacity Management: Not Supported 00:30:37.031 Variable Capacity Management: Not Supported 00:30:37.031 Delete Endurance Group: Not Supported 00:30:37.031 Delete NVM Set: Not Supported 00:30:37.031 Extended LBA Formats Supported: Supported 00:30:37.031 Flexible Data Placement Supported: Supported 00:30:37.031 00:30:37.031 Controller Memory Buffer Support 00:30:37.031 ================================ 00:30:37.031 Supported: No 00:30:37.031 00:30:37.031 Persistent Memory Region Support 00:30:37.031 ================================ 00:30:37.031 Supported: No 00:30:37.031 00:30:37.031 Admin Command Set Attributes 00:30:37.031 ============================ 00:30:37.031 Security Send/Receive: Not Supported 00:30:37.031 Format NVM: Supported 00:30:37.031 Firmware Activate/Download: Not Supported 00:30:37.031 Namespace Management: Supported 00:30:37.031 Device Self-Test: Not Supported 00:30:37.031 Directives: Supported 00:30:37.031 NVMe-MI: Not Supported 00:30:37.031 Virtualization Management: Not Supported 00:30:37.031 Doorbell Buffer Config: Supported 00:30:37.031 Get LBA Status Capability: Not Supported 00:30:37.031 Command & Feature Lockdown Capability: Not Supported 00:30:37.031 Abort Command Limit: 4 00:30:37.031 Async Event Request Limit: 4 00:30:37.031 Number of Firmware Slots: N/A 00:30:37.031 Firmware Slot 1 Read-Only: N/A 00:30:37.031 Firmware Activation Without Reset: N/A 00:30:37.031 Multiple Update Detection Support: N/A 00:30:37.031 Firmware Update Granularity: No Information Provided 00:30:37.031 Per-Namespace SMART Log: Yes 00:30:37.031 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.031 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:37.031 Command Effects Log Page: Supported 00:30:37.031 Get Log Page Extended Data: Supported 00:30:37.031 Telemetry Log Pages: Not Supported 00:30:37.031 Persistent Event Log Pages: Not Supported 00:30:37.031 Supported Log Pages Log Page: May Support 00:30:37.031 Commands Supported & Effects Log Page: Not Supported 00:30:37.031 Feature Identifiers & Effects Log Page:May Support 00:30:37.031 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.031 Data Area 4 for Telemetry Log: Not Supported 00:30:37.031 Error Log Page Entries Supported: 1 00:30:37.031 Keep Alive: Not Supported 00:30:37.031 00:30:37.031 NVM Command Set Attributes 00:30:37.031 ========================== 00:30:37.031 Submission Queue Entry Size 00:30:37.031 Max: 64 00:30:37.031 Min: 64 00:30:37.031 Completion Queue Entry Size 00:30:37.031 Max: 16 00:30:37.031 Min: 16 00:30:37.031 Number of Namespaces: 256 00:30:37.031 Compare Command: Supported 00:30:37.031 Write Uncorrectable Command: Not Supported 00:30:37.031 Dataset Management Command: Supported 00:30:37.031 Write Zeroes Command: Supported 00:30:37.031 Set Features Save Field: Supported 00:30:37.031 Reservations: Not Supported 00:30:37.031 
Timestamp: Supported 00:30:37.031 Copy: Supported 00:30:37.031 Volatile Write Cache: Present 00:30:37.031 Atomic Write Unit (Normal): 1 00:30:37.031 Atomic Write Unit (PFail): 1 00:30:37.031 Atomic Compare & Write Unit: 1 00:30:37.031 Fused Compare & Write: Not Supported 00:30:37.031 Scatter-Gather List 00:30:37.031 SGL Command Set: Supported 00:30:37.031 SGL Keyed: Not Supported 00:30:37.031 SGL Bit Bucket Descriptor: Not Supported 00:30:37.031 SGL Metadata Pointer: Not Supported 00:30:37.031 Oversized SGL: Not Supported 00:30:37.031 SGL Metadata Address: Not Supported 00:30:37.031 SGL Offset: Not Supported 00:30:37.031 Transport SGL Data Block: Not Supported 00:30:37.031 Replay Protected Memory Block: Not Supported 00:30:37.031 00:30:37.031 Firmware Slot Information 00:30:37.031 ========================= 00:30:37.031 Active slot: 1 00:30:37.031 Slot 1 Firmware Revision: 1.0 00:30:37.031 00:30:37.031 00:30:37.031 Commands Supported and Effects 00:30:37.031 ============================== 00:30:37.031 Admin Commands 00:30:37.031 -------------- 00:30:37.031 Delete I/O Submission Queue (00h): Supported 00:30:37.031 Create I/O Submission Queue (01h): Supported 00:30:37.031 Get Log Page (02h): Supported 00:30:37.031 Delete I/O Completion Queue (04h): Supported 00:30:37.031 Create I/O Completion Queue (05h): Supported 00:30:37.031 Identify (06h): Supported 00:30:37.031 Abort (08h): Supported 00:30:37.031 Set Features (09h): Supported 00:30:37.031 Get Features (0Ah): Supported 00:30:37.031 Asynchronous Event Request (0Ch): Supported 00:30:37.031 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.031 Directive Send (19h): Supported 00:30:37.031 Directive Receive (1Ah): Supported 00:30:37.031 Virtualization Management (1Ch): Supported 00:30:37.031 Doorbell Buffer Config (7Ch): Supported 00:30:37.031 Format NVM (80h): Supported LBA-Change 00:30:37.031 I/O Commands 00:30:37.031 ------------ 00:30:37.031 Flush (00h): Supported LBA-Change 00:30:37.031 Write (01h): Supported LBA-Change 00:30:37.031 Read (02h): Supported 00:30:37.031 Compare (05h): Supported 00:30:37.031 Write Zeroes (08h): Supported LBA-Change 00:30:37.031 Dataset Management (09h): Supported LBA-Change 00:30:37.031 Unknown (0Ch): Supported 00:30:37.031 Unknown (12h): Supported 00:30:37.031 Copy (19h): Supported LBA-Change 00:30:37.031 Unknown (1Dh): Supported LBA-Change 00:30:37.031 00:30:37.031 Error Log 00:30:37.031 ========= 00:30:37.031 00:30:37.031 Arbitration 00:30:37.031 =========== 00:30:37.031 Arbitration Burst: no limit 00:30:37.031 00:30:37.031 Power Management 00:30:37.031 ================ 00:30:37.031 Number of Power States: 1 00:30:37.031 Current Power State: Power State #0 00:30:37.031 Power State #0: 00:30:37.031 Max Power: 25.00 W 00:30:37.031 Non-Operational State: Operational 00:30:37.031 Entry Latency: 16 microseconds 00:30:37.031 Exit Latency: 4 microseconds 00:30:37.031 Relative Read Throughput: 0 00:30:37.031 Relative Read Latency: 0 00:30:37.031 Relative Write Throughput: 0 00:30:37.031 Relative Write Latency: 0 00:30:37.031 Idle Power: Not Reported 00:30:37.031 Active Power: Not Reported 00:30:37.031 Non-Operational Permissive Mode: Not Supported 00:30:37.031 00:30:37.031 Health Information 00:30:37.031 ================== 00:30:37.031 Critical Warnings: 00:30:37.031 Available Spare Space: OK 00:30:37.031 Temperature: OK 00:30:37.031 Device Reliability: OK 00:30:37.031 Read Only: No 00:30:37.031 Volatile Memory Backup: OK 00:30:37.031 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.031 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.031 Available Spare: 0% 00:30:37.031 Available Spare Threshold: 0% 00:30:37.031 Life Percentage Used: 0% 00:30:37.031 Data Units Read: 803 00:30:37.031 Data Units Written: 732 00:30:37.031 Host Read Commands: 38090 00:30:37.031 Host Write Commands: 37515 00:30:37.031 Controller Busy Time: 0 minutes 00:30:37.031 Power Cycles: 0 00:30:37.031 Power On Hours: 0 hours 00:30:37.031 Unsafe Shutdowns: 0 00:30:37.031 Unrecoverable Media Errors: 0 00:30:37.031 Lifetime Error Log Entries: 0 00:30:37.031 Warning Temperature Time: 0 minutes 00:30:37.031 Critical Temperature Time: 0 minutes 00:30:37.031 00:30:37.032 Number of Queues 00:30:37.032 ================ 00:30:37.032 Number of I/O Submission Queues: 64 00:30:37.032 Number of I/O Completion Queues: 64 00:30:37.032 00:30:37.032 ZNS Specific Controller Data 00:30:37.032 ============================ 00:30:37.032 Zone Append Size Limit: 0 00:30:37.032 00:30:37.032 00:30:37.032 Active Namespaces 00:30:37.032 ================= 00:30:37.032 Namespace ID:1 00:30:37.032 Error Recovery Timeout: Unlimited 00:30:37.032 Command Set Identifier: NVM (00h) 00:30:37.032 Deallocate: Supported 00:30:37.032 Deallocated/Unwritten Error: Supported 00:30:37.032 Deallocated Read Value: All 0x00 00:30:37.032 Deallocate in Write Zeroes: Not Supported 00:30:37.032 Deallocated Guard Field: 0xFFFF 00:30:37.032 Flush: Supported 00:30:37.032 Reservation: Not Supported 00:30:37.032 Namespace Sharing Capabilities: Multiple Controllers 00:30:37.032 Size (in LBAs): 262144 (1GiB) 00:30:37.032 Capacity (in LBAs): 262144 (1GiB) 00:30:37.032 Utilization (in LBAs): 262144 (1GiB) 00:30:37.032 Thin Provisioning: Not Supported 00:30:37.032 Per-NS Atomic Units: No 00:30:37.032 Maximum Single Source Range Length: 128 00:30:37.032 Maximum Copy Length: 128 00:30:37.032 Maximum Source Range Count: 128 00:30:37.032 NGUID/EUI64 Never Reused: No 00:30:37.032 Namespace Write Protected: No 00:30:37.032 Endurance group ID: 1 00:30:37.032 Number of LBA Formats: 8 00:30:37.032 Current LBA Format: LBA Format #04 00:30:37.032 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.032 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.032 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.032 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.032 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.032 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.032 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.032 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.032 00:30:37.032 Get Feature FDP: 00:30:37.032 ================ 00:30:37.032 Enabled: Yes 00:30:37.032 FDP configuration index: 0 00:30:37.032 00:30:37.032 FDP configurations log page 00:30:37.032 =========================== 00:30:37.032 Number of FDP configurations: 1 00:30:37.032 Version: 0 00:30:37.032 Size: 112 00:30:37.032 FDP Configuration Descriptor: 0 00:30:37.032 Descriptor Size: 96 00:30:37.032 Reclaim Group Identifier format: 2 00:30:37.032 FDP Volatile Write Cache: Not Present 00:30:37.032 FDP Configuration: Valid 00:30:37.032 Vendor Specific Size: 0 00:30:37.032 Number of Reclaim Groups: 2 00:30:37.032 Number of Recalim Unit Handles: 8 00:30:37.032 Max Placement Identifiers: 128 00:30:37.032 Number of Namespaces Suppprted: 256 00:30:37.032 Reclaim unit Nominal Size: 6000000 bytes 00:30:37.032 Estimated Reclaim Unit Time Limit: Not Reported 00:30:37.032 RUH Desc #000: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #001: RUH 
Type: Initially Isolated 00:30:37.032 RUH Desc #002: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #003: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #004: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #005: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #006: RUH Type: Initially Isolated 00:30:37.032 RUH Desc #007: RUH Type: Initially Isolated 00:30:37.032 00:30:37.032 FDP reclaim unit handle usage log page 00:30:37.032 ====================================== 00:30:37.032 Number of Reclaim Unit Handles: 8 00:30:37.032 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:37.032 RUH Usage Desc #001: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #002: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #003: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #004: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #005: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #006: RUH Attributes: Unused 00:30:37.032 RUH Usage Desc #007: RUH Attributes: Unused 00:30:37.032 00:30:37.032 FDP statistics log page 00:30:37.032 ======================= 00:30:37.032 Host bytes with metadata written: 469934080 00:30:37.032 [2024-12-09 23:12:17.554861] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62866 terminated unexpected 00:30:37.032 Media bytes with metadata written: 469987328 00:30:37.032 Media bytes erased: 0 00:30:37.032 00:30:37.032 FDP events log page 00:30:37.032 =================== 00:30:37.032 Number of FDP events: 0 00:30:37.032 00:30:37.032 NVM Specific Namespace Data 00:30:37.032 =========================== 00:30:37.032 Logical Block Storage Tag Mask: 0 00:30:37.032 Protection Information Capabilities: 00:30:37.032 16b Guard Protection Information Storage Tag Support: No 00:30:37.032 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.032 Storage Tag Check Read Support: No 00:30:37.032 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.032 ===================================================== 00:30:37.032 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:37.032 ===================================================== 00:30:37.032 Controller Capabilities/Features 00:30:37.032 ================================ 00:30:37.032 Vendor ID: 1b36 00:30:37.032 Subsystem Vendor ID: 1af4 00:30:37.032 Serial Number: 12342 00:30:37.032 Model Number: QEMU NVMe Ctrl 00:30:37.032 Firmware Version: 8.0.0 00:30:37.032 Recommended Arb Burst: 6 00:30:37.032 IEEE OUI Identifier: 00 54 52 00:30:37.032 Multi-path I/O 00:30:37.032 May have multiple subsystem ports: No 00:30:37.032 May have multiple controllers: No 00:30:37.032 Associated with SR-IOV VF: No 00:30:37.032 Max Data Transfer Size: 524288 00:30:37.032 Max Number of Namespaces: 256 
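The FDP statistics just above are raw byte counters. Converted, the 469934080 host bytes and 469987328 media bytes both come to roughly 448.2 MiB, a media/host write-amplification of about 1.0001 for this short run. A quick check of that arithmetic, using only the numbers printed above:

    # Convert the 12343 controller's FDP counters to MiB and compute the
    # media/host write-amplification ratio.
    awk 'BEGIN {
        host = 469934080; media = 469987328           # bytes, from the log above
        printf "host:  %.1f MiB\n", host / 1048576    # -> 448.2 MiB
        printf "media: %.1f MiB\n", media / 1048576   # -> 448.2 MiB
        printf "amplification: %.4f\n", media / host  # -> 1.0001
    }'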
00:30:37.032 Max Number of I/O Queues: 64 00:30:37.032 NVMe Specification Version (VS): 1.4 00:30:37.032 NVMe Specification Version (Identify): 1.4 00:30:37.032 Maximum Queue Entries: 2048 00:30:37.032 Contiguous Queues Required: Yes 00:30:37.032 Arbitration Mechanisms Supported 00:30:37.032 Weighted Round Robin: Not Supported 00:30:37.032 Vendor Specific: Not Supported 00:30:37.032 Reset Timeout: 7500 ms 00:30:37.032 Doorbell Stride: 4 bytes 00:30:37.032 NVM Subsystem Reset: Not Supported 00:30:37.032 Command Sets Supported 00:30:37.032 NVM Command Set: Supported 00:30:37.032 Boot Partition: Not Supported 00:30:37.033 Memory Page Size Minimum: 4096 bytes 00:30:37.033 Memory Page Size Maximum: 65536 bytes 00:30:37.033 Persistent Memory Region: Not Supported 00:30:37.033 Optional Asynchronous Events Supported 00:30:37.033 Namespace Attribute Notices: Supported 00:30:37.033 Firmware Activation Notices: Not Supported 00:30:37.033 ANA Change Notices: Not Supported 00:30:37.033 PLE Aggregate Log Change Notices: Not Supported 00:30:37.033 LBA Status Info Alert Notices: Not Supported 00:30:37.033 EGE Aggregate Log Change Notices: Not Supported 00:30:37.033 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.033 Zone Descriptor Change Notices: Not Supported 00:30:37.033 Discovery Log Change Notices: Not Supported 00:30:37.033 Controller Attributes 00:30:37.033 128-bit Host Identifier: Not Supported 00:30:37.033 Non-Operational Permissive Mode: Not Supported 00:30:37.033 NVM Sets: Not Supported 00:30:37.033 Read Recovery Levels: Not Supported 00:30:37.033 Endurance Groups: Not Supported 00:30:37.033 Predictable Latency Mode: Not Supported 00:30:37.033 Traffic Based Keep ALive: Not Supported 00:30:37.033 Namespace Granularity: Not Supported 00:30:37.033 SQ Associations: Not Supported 00:30:37.033 UUID List: Not Supported 00:30:37.033 Multi-Domain Subsystem: Not Supported 00:30:37.033 Fixed Capacity Management: Not Supported 00:30:37.033 Variable Capacity Management: Not Supported 00:30:37.033 Delete Endurance Group: Not Supported 00:30:37.033 Delete NVM Set: Not Supported 00:30:37.033 Extended LBA Formats Supported: Supported 00:30:37.033 Flexible Data Placement Supported: Not Supported 00:30:37.033 00:30:37.033 Controller Memory Buffer Support 00:30:37.033 ================================ 00:30:37.033 Supported: No 00:30:37.033 00:30:37.033 Persistent Memory Region Support 00:30:37.033 ================================ 00:30:37.033 Supported: No 00:30:37.033 00:30:37.033 Admin Command Set Attributes 00:30:37.033 ============================ 00:30:37.033 Security Send/Receive: Not Supported 00:30:37.033 Format NVM: Supported 00:30:37.033 Firmware Activate/Download: Not Supported 00:30:37.033 Namespace Management: Supported 00:30:37.033 Device Self-Test: Not Supported 00:30:37.033 Directives: Supported 00:30:37.033 NVMe-MI: Not Supported 00:30:37.033 Virtualization Management: Not Supported 00:30:37.033 Doorbell Buffer Config: Supported 00:30:37.033 Get LBA Status Capability: Not Supported 00:30:37.033 Command & Feature Lockdown Capability: Not Supported 00:30:37.033 Abort Command Limit: 4 00:30:37.033 Async Event Request Limit: 4 00:30:37.033 Number of Firmware Slots: N/A 00:30:37.033 Firmware Slot 1 Read-Only: N/A 00:30:37.033 Firmware Activation Without Reset: N/A 00:30:37.033 Multiple Update Detection Support: N/A 00:30:37.033 Firmware Update Granularity: No Information Provided 00:30:37.033 Per-Namespace SMART Log: Yes 00:30:37.033 Asymmetric Namespace Access Log Page: Not Supported 
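Of the four QEMU controllers identified in this run, only 12343 (the fdp-subsys3 subsystem) reports Endurance Groups and Flexible Data Placement as Supported; 12340, 12341, and this 12342 controller report both as Not Supported. A one-liner to pull that comparison out of a saved report, with report.txt the same hypothetical capture as above:

    # Pair each controller banner with its Endurance Groups and FDP attributes.
    grep -E '^(NVMe Controller at|Endurance Groups:|Flexible Data Placement Supported:)' report.txt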
00:30:37.033 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:37.033 Command Effects Log Page: Supported 00:30:37.033 Get Log Page Extended Data: Supported 00:30:37.033 Telemetry Log Pages: Not Supported 00:30:37.033 Persistent Event Log Pages: Not Supported 00:30:37.033 Supported Log Pages Log Page: May Support 00:30:37.033 Commands Supported & Effects Log Page: Not Supported 00:30:37.033 Feature Identifiers & Effects Log Page:May Support 00:30:37.033 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.033 Data Area 4 for Telemetry Log: Not Supported 00:30:37.033 Error Log Page Entries Supported: 1 00:30:37.033 Keep Alive: Not Supported 00:30:37.033 00:30:37.033 NVM Command Set Attributes 00:30:37.033 ========================== 00:30:37.033 Submission Queue Entry Size 00:30:37.033 Max: 64 00:30:37.033 Min: 64 00:30:37.033 Completion Queue Entry Size 00:30:37.033 Max: 16 00:30:37.033 Min: 16 00:30:37.033 Number of Namespaces: 256 00:30:37.033 Compare Command: Supported 00:30:37.033 Write Uncorrectable Command: Not Supported 00:30:37.033 Dataset Management Command: Supported 00:30:37.033 Write Zeroes Command: Supported 00:30:37.033 Set Features Save Field: Supported 00:30:37.033 Reservations: Not Supported 00:30:37.033 Timestamp: Supported 00:30:37.033 Copy: Supported 00:30:37.033 Volatile Write Cache: Present 00:30:37.033 Atomic Write Unit (Normal): 1 00:30:37.033 Atomic Write Unit (PFail): 1 00:30:37.033 Atomic Compare & Write Unit: 1 00:30:37.033 Fused Compare & Write: Not Supported 00:30:37.033 Scatter-Gather List 00:30:37.033 SGL Command Set: Supported 00:30:37.033 SGL Keyed: Not Supported 00:30:37.033 SGL Bit Bucket Descriptor: Not Supported 00:30:37.033 SGL Metadata Pointer: Not Supported 00:30:37.033 Oversized SGL: Not Supported 00:30:37.033 SGL Metadata Address: Not Supported 00:30:37.033 SGL Offset: Not Supported 00:30:37.033 Transport SGL Data Block: Not Supported 00:30:37.033 Replay Protected Memory Block: Not Supported 00:30:37.033 00:30:37.033 Firmware Slot Information 00:30:37.033 ========================= 00:30:37.033 Active slot: 1 00:30:37.033 Slot 1 Firmware Revision: 1.0 00:30:37.033 00:30:37.033 00:30:37.033 Commands Supported and Effects 00:30:37.033 ============================== 00:30:37.033 Admin Commands 00:30:37.033 -------------- 00:30:37.033 Delete I/O Submission Queue (00h): Supported 00:30:37.033 Create I/O Submission Queue (01h): Supported 00:30:37.033 Get Log Page (02h): Supported 00:30:37.033 Delete I/O Completion Queue (04h): Supported 00:30:37.033 Create I/O Completion Queue (05h): Supported 00:30:37.033 Identify (06h): Supported 00:30:37.033 Abort (08h): Supported 00:30:37.033 Set Features (09h): Supported 00:30:37.033 Get Features (0Ah): Supported 00:30:37.033 Asynchronous Event Request (0Ch): Supported 00:30:37.033 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.033 Directive Send (19h): Supported 00:30:37.033 Directive Receive (1Ah): Supported 00:30:37.033 Virtualization Management (1Ch): Supported 00:30:37.033 Doorbell Buffer Config (7Ch): Supported 00:30:37.033 Format NVM (80h): Supported LBA-Change 00:30:37.033 I/O Commands 00:30:37.033 ------------ 00:30:37.033 Flush (00h): Supported LBA-Change 00:30:37.033 Write (01h): Supported LBA-Change 00:30:37.033 Read (02h): Supported 00:30:37.033 Compare (05h): Supported 00:30:37.033 Write Zeroes (08h): Supported LBA-Change 00:30:37.033 Dataset Management (09h): Supported LBA-Change 00:30:37.033 Unknown (0Ch): Supported 00:30:37.033 Unknown (12h): Supported 00:30:37.033 Copy (19h): 
Supported LBA-Change 00:30:37.033 Unknown (1Dh): Supported LBA-Change 00:30:37.033 00:30:37.033 Error Log 00:30:37.033 ========= 00:30:37.033 00:30:37.033 Arbitration 00:30:37.033 =========== 00:30:37.033 Arbitration Burst: no limit 00:30:37.033 00:30:37.033 Power Management 00:30:37.033 ================ 00:30:37.033 Number of Power States: 1 00:30:37.033 Current Power State: Power State #0 00:30:37.033 Power State #0: 00:30:37.033 Max Power: 25.00 W 00:30:37.033 Non-Operational State: Operational 00:30:37.033 Entry Latency: 16 microseconds 00:30:37.033 Exit Latency: 4 microseconds 00:30:37.033 Relative Read Throughput: 0 00:30:37.033 Relative Read Latency: 0 00:30:37.033 Relative Write Throughput: 0 00:30:37.033 Relative Write Latency: 0 00:30:37.033 Idle Power: Not Reported 00:30:37.033 Active Power: Not Reported 00:30:37.033 Non-Operational Permissive Mode: Not Supported 00:30:37.033 00:30:37.033 Health Information 00:30:37.033 ================== 00:30:37.033 Critical Warnings: 00:30:37.033 Available Spare Space: OK 00:30:37.033 Temperature: OK 00:30:37.033 Device Reliability: OK 00:30:37.033 Read Only: No 00:30:37.033 Volatile Memory Backup: OK 00:30:37.033 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.033 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.033 Available Spare: 0% 00:30:37.033 Available Spare Threshold: 0% 00:30:37.033 Life Percentage Used: 0% 00:30:37.033 Data Units Read: 2189 00:30:37.033 Data Units Written: 1976 00:30:37.033 Host Read Commands: 112558 00:30:37.033 Host Write Commands: 110828 00:30:37.033 Controller Busy Time: 0 minutes 00:30:37.033 Power Cycles: 0 00:30:37.033 Power On Hours: 0 hours 00:30:37.033 Unsafe Shutdowns: 0 00:30:37.033 Unrecoverable Media Errors: 0 00:30:37.033 Lifetime Error Log Entries: 0 00:30:37.033 Warning Temperature Time: 0 minutes 00:30:37.033 Critical Temperature Time: 0 minutes 00:30:37.033 00:30:37.033 Number of Queues 00:30:37.033 ================ 00:30:37.033 Number of I/O Submission Queues: 64 00:30:37.033 Number of I/O Completion Queues: 64 00:30:37.033 00:30:37.033 ZNS Specific Controller Data 00:30:37.033 ============================ 00:30:37.034 Zone Append Size Limit: 0 00:30:37.034 00:30:37.034 00:30:37.034 Active Namespaces 00:30:37.034 ================= 00:30:37.034 Namespace ID:1 00:30:37.034 Error Recovery Timeout: Unlimited 00:30:37.034 Command Set Identifier: NVM (00h) 00:30:37.034 Deallocate: Supported 00:30:37.034 Deallocated/Unwritten Error: Supported 00:30:37.034 Deallocated Read Value: All 0x00 00:30:37.034 Deallocate in Write Zeroes: Not Supported 00:30:37.034 Deallocated Guard Field: 0xFFFF 00:30:37.034 Flush: Supported 00:30:37.034 Reservation: Not Supported 00:30:37.034 Namespace Sharing Capabilities: Private 00:30:37.034 Size (in LBAs): 1048576 (4GiB) 00:30:37.034 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.034 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.034 Thin Provisioning: Not Supported 00:30:37.034 Per-NS Atomic Units: No 00:30:37.034 Maximum Single Source Range Length: 128 00:30:37.034 Maximum Copy Length: 128 00:30:37.034 Maximum Source Range Count: 128 00:30:37.034 NGUID/EUI64 Never Reused: No 00:30:37.034 Namespace Write Protected: No 00:30:37.034 Number of LBA Formats: 8 00:30:37.034 Current LBA Format: LBA Format #04 00:30:37.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.034 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.034 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.034 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.034 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.034 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.034 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.034 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.034 00:30:37.034 NVM Specific Namespace Data 00:30:37.034 =========================== 00:30:37.034 Logical Block Storage Tag Mask: 0 00:30:37.034 Protection Information Capabilities: 00:30:37.034 16b Guard Protection Information Storage Tag Support: No 00:30:37.034 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.034 Storage Tag Check Read Support: No 00:30:37.034 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Namespace ID:2 00:30:37.034 Error Recovery Timeout: Unlimited 00:30:37.034 Command Set Identifier: NVM (00h) 00:30:37.034 Deallocate: Supported 00:30:37.034 Deallocated/Unwritten Error: Supported 00:30:37.034 Deallocated Read Value: All 0x00 00:30:37.034 Deallocate in Write Zeroes: Not Supported 00:30:37.034 Deallocated Guard Field: 0xFFFF 00:30:37.034 Flush: Supported 00:30:37.034 Reservation: Not Supported 00:30:37.034 Namespace Sharing Capabilities: Private 00:30:37.034 Size (in LBAs): 1048576 (4GiB) 00:30:37.034 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.034 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.034 Thin Provisioning: Not Supported 00:30:37.034 Per-NS Atomic Units: No 00:30:37.034 Maximum Single Source Range Length: 128 00:30:37.034 Maximum Copy Length: 128 00:30:37.034 Maximum Source Range Count: 128 00:30:37.034 NGUID/EUI64 Never Reused: No 00:30:37.034 Namespace Write Protected: No 00:30:37.034 Number of LBA Formats: 8 00:30:37.034 Current LBA Format: LBA Format #04 00:30:37.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.034 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.034 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.034 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.034 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.034 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.034 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.034 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.034 00:30:37.034 NVM Specific Namespace Data 00:30:37.034 =========================== 00:30:37.034 Logical Block Storage Tag Mask: 0 00:30:37.034 Protection Information Capabilities: 00:30:37.034 16b Guard Protection Information Storage Tag Support: No 00:30:37.034 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.034 Storage Tag Check Read Support: No 00:30:37.034 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Namespace ID:3 00:30:37.034 Error Recovery Timeout: Unlimited 00:30:37.034 Command Set Identifier: NVM (00h) 00:30:37.034 Deallocate: Supported 00:30:37.034 Deallocated/Unwritten Error: Supported 00:30:37.034 Deallocated Read Value: All 0x00 00:30:37.034 Deallocate in Write Zeroes: Not Supported 00:30:37.034 Deallocated Guard Field: 0xFFFF 00:30:37.034 Flush: Supported 00:30:37.034 Reservation: Not Supported 00:30:37.034 Namespace Sharing Capabilities: Private 00:30:37.034 Size (in LBAs): 1048576 (4GiB) 00:30:37.034 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.034 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.034 Thin Provisioning: Not Supported 00:30:37.034 Per-NS Atomic Units: No 00:30:37.034 Maximum Single Source Range Length: 128 00:30:37.034 Maximum Copy Length: 128 00:30:37.034 Maximum Source Range Count: 128 00:30:37.034 NGUID/EUI64 Never Reused: No 00:30:37.034 Namespace Write Protected: No 00:30:37.034 Number of LBA Formats: 8 00:30:37.034 Current LBA Format: LBA Format #04 00:30:37.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.034 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.034 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.034 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.034 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.034 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.034 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.034 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.034 00:30:37.034 NVM Specific Namespace Data 00:30:37.034 =========================== 00:30:37.034 Logical Block Storage Tag Mask: 0 00:30:37.034 Protection Information Capabilities: 00:30:37.034 16b Guard Protection Information Storage Tag Support: No 00:30:37.034 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.034 Storage Tag Check Read Support: No 00:30:37.034 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.034 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:37.034 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:37.294 ===================================================== 00:30:37.294 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:37.294 ===================================================== 00:30:37.294 Controller Capabilities/Features 00:30:37.294 ================================ 00:30:37.294 Vendor ID: 1b36 00:30:37.294 Subsystem Vendor ID: 1af4 00:30:37.294 Serial Number: 12340 00:30:37.294 Model Number: QEMU NVMe Ctrl 00:30:37.294 Firmware Version: 8.0.0 00:30:37.294 Recommended Arb Burst: 6 00:30:37.294 IEEE OUI Identifier: 00 54 52 00:30:37.294 Multi-path I/O 00:30:37.294 May have multiple subsystem ports: No 00:30:37.294 May have multiple controllers: No 00:30:37.294 Associated with SR-IOV VF: No 00:30:37.294 Max Data Transfer Size: 524288 00:30:37.294 Max Number of Namespaces: 256 00:30:37.294 Max Number of I/O Queues: 64 00:30:37.294 NVMe Specification Version (VS): 1.4 00:30:37.294 NVMe Specification Version (Identify): 1.4 00:30:37.294 Maximum Queue Entries: 2048 00:30:37.294 Contiguous Queues Required: Yes 00:30:37.294 Arbitration Mechanisms Supported 00:30:37.294 Weighted Round Robin: Not Supported 00:30:37.294 Vendor Specific: Not Supported 00:30:37.294 Reset Timeout: 7500 ms 00:30:37.294 Doorbell Stride: 4 bytes 00:30:37.294 NVM Subsystem Reset: Not Supported 00:30:37.294 Command Sets Supported 00:30:37.294 NVM Command Set: Supported 00:30:37.294 Boot Partition: Not Supported 00:30:37.294 Memory Page Size Minimum: 4096 bytes 00:30:37.294 Memory Page Size Maximum: 65536 bytes 00:30:37.294 Persistent Memory Region: Not Supported 00:30:37.294 Optional Asynchronous Events Supported 00:30:37.294 Namespace Attribute Notices: Supported 00:30:37.294 Firmware Activation Notices: Not Supported 00:30:37.294 ANA Change Notices: Not Supported 00:30:37.294 PLE Aggregate Log Change Notices: Not Supported 00:30:37.294 LBA Status Info Alert Notices: Not Supported 00:30:37.294 EGE Aggregate Log Change Notices: Not Supported 00:30:37.294 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.294 Zone Descriptor Change Notices: Not Supported 00:30:37.294 Discovery Log Change Notices: Not Supported 00:30:37.294 Controller Attributes 00:30:37.294 128-bit Host Identifier: Not Supported 00:30:37.294 Non-Operational Permissive Mode: Not Supported 00:30:37.294 NVM Sets: Not Supported 00:30:37.294 Read Recovery Levels: Not Supported 00:30:37.294 Endurance Groups: Not Supported 00:30:37.294 Predictable Latency Mode: Not Supported 00:30:37.294 Traffic Based Keep ALive: Not Supported 00:30:37.294 Namespace Granularity: Not Supported 00:30:37.294 SQ Associations: Not Supported 00:30:37.294 UUID List: Not Supported 00:30:37.294 Multi-Domain Subsystem: Not Supported 00:30:37.294 Fixed Capacity Management: Not Supported 00:30:37.294 Variable Capacity Management: Not Supported 00:30:37.294 Delete Endurance Group: Not Supported 00:30:37.294 Delete NVM Set: Not Supported 00:30:37.294 Extended LBA Formats Supported: Supported 00:30:37.294 Flexible Data Placement Supported: Not Supported 00:30:37.294 00:30:37.294 Controller Memory Buffer Support 00:30:37.294 ================================ 00:30:37.294 Supported: No 00:30:37.294 00:30:37.294 Persistent Memory Region Support 00:30:37.294 ================================ 00:30:37.294 Supported: No 00:30:37.294 00:30:37.294 Admin Command Set Attributes 00:30:37.294 ============================ 00:30:37.294 Security Send/Receive: Not Supported 00:30:37.294 
Format NVM: Supported 00:30:37.294 Firmware Activate/Download: Not Supported 00:30:37.294 Namespace Management: Supported 00:30:37.294 Device Self-Test: Not Supported 00:30:37.294 Directives: Supported 00:30:37.294 NVMe-MI: Not Supported 00:30:37.294 Virtualization Management: Not Supported 00:30:37.294 Doorbell Buffer Config: Supported 00:30:37.294 Get LBA Status Capability: Not Supported 00:30:37.294 Command & Feature Lockdown Capability: Not Supported 00:30:37.294 Abort Command Limit: 4 00:30:37.294 Async Event Request Limit: 4 00:30:37.294 Number of Firmware Slots: N/A 00:30:37.294 Firmware Slot 1 Read-Only: N/A 00:30:37.294 Firmware Activation Without Reset: N/A 00:30:37.294 Multiple Update Detection Support: N/A 00:30:37.294 Firmware Update Granularity: No Information Provided 00:30:37.294 Per-Namespace SMART Log: Yes 00:30:37.294 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.294 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:37.294 Command Effects Log Page: Supported 00:30:37.294 Get Log Page Extended Data: Supported 00:30:37.294 Telemetry Log Pages: Not Supported 00:30:37.294 Persistent Event Log Pages: Not Supported 00:30:37.294 Supported Log Pages Log Page: May Support 00:30:37.294 Commands Supported & Effects Log Page: Not Supported 00:30:37.294 Feature Identifiers & Effects Log Page:May Support 00:30:37.294 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.294 Data Area 4 for Telemetry Log: Not Supported 00:30:37.294 Error Log Page Entries Supported: 1 00:30:37.294 Keep Alive: Not Supported 00:30:37.294 00:30:37.294 NVM Command Set Attributes 00:30:37.294 ========================== 00:30:37.294 Submission Queue Entry Size 00:30:37.294 Max: 64 00:30:37.294 Min: 64 00:30:37.294 Completion Queue Entry Size 00:30:37.294 Max: 16 00:30:37.294 Min: 16 00:30:37.294 Number of Namespaces: 256 00:30:37.294 Compare Command: Supported 00:30:37.294 Write Uncorrectable Command: Not Supported 00:30:37.295 Dataset Management Command: Supported 00:30:37.295 Write Zeroes Command: Supported 00:30:37.295 Set Features Save Field: Supported 00:30:37.295 Reservations: Not Supported 00:30:37.295 Timestamp: Supported 00:30:37.295 Copy: Supported 00:30:37.295 Volatile Write Cache: Present 00:30:37.295 Atomic Write Unit (Normal): 1 00:30:37.295 Atomic Write Unit (PFail): 1 00:30:37.295 Atomic Compare & Write Unit: 1 00:30:37.295 Fused Compare & Write: Not Supported 00:30:37.295 Scatter-Gather List 00:30:37.295 SGL Command Set: Supported 00:30:37.295 SGL Keyed: Not Supported 00:30:37.295 SGL Bit Bucket Descriptor: Not Supported 00:30:37.295 SGL Metadata Pointer: Not Supported 00:30:37.295 Oversized SGL: Not Supported 00:30:37.295 SGL Metadata Address: Not Supported 00:30:37.295 SGL Offset: Not Supported 00:30:37.295 Transport SGL Data Block: Not Supported 00:30:37.295 Replay Protected Memory Block: Not Supported 00:30:37.295 00:30:37.295 Firmware Slot Information 00:30:37.295 ========================= 00:30:37.295 Active slot: 1 00:30:37.295 Slot 1 Firmware Revision: 1.0 00:30:37.295 00:30:37.295 00:30:37.295 Commands Supported and Effects 00:30:37.295 ============================== 00:30:37.295 Admin Commands 00:30:37.295 -------------- 00:30:37.295 Delete I/O Submission Queue (00h): Supported 00:30:37.295 Create I/O Submission Queue (01h): Supported 00:30:37.295 Get Log Page (02h): Supported 00:30:37.295 Delete I/O Completion Queue (04h): Supported 00:30:37.295 Create I/O Completion Queue (05h): Supported 00:30:37.295 Identify (06h): Supported 00:30:37.295 Abort (08h): Supported 
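[annotation] The traced script lines in this run (nvme.sh@15: for bdf in "${bdfs[@]}", followed by nvme.sh@16 invoking spdk_nvme_identify) show the test driving one identify run per bound PCIe function. A minimal reconstruction of that loop, assuming the functions are enumerated with lspci; the real nvme.sh fills bdfs through its own helpers, so the enumeration here is illustrative only:

  # illustrative reconstruction of the traced loop: one
  # spdk_nvme_identify run per NVMe PCIe function (BDF)
  bdfs=($(lspci -Dmm | awk '/Non-Volatile memory controller/ { print $1 }'))
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done

The -r argument mirrors the transport string format seen in the log ('trtype:PCIe traddr:0000:00:10.0' and so on), which is why each dump below is headed by a different controller address.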
00:30:37.295 Set Features (09h): Supported 00:30:37.295 Get Features (0Ah): Supported 00:30:37.295 Asynchronous Event Request (0Ch): Supported 00:30:37.295 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.295 Directive Send (19h): Supported 00:30:37.295 Directive Receive (1Ah): Supported 00:30:37.295 Virtualization Management (1Ch): Supported 00:30:37.295 Doorbell Buffer Config (7Ch): Supported 00:30:37.295 Format NVM (80h): Supported LBA-Change 00:30:37.295 I/O Commands 00:30:37.295 ------------ 00:30:37.295 Flush (00h): Supported LBA-Change 00:30:37.295 Write (01h): Supported LBA-Change 00:30:37.295 Read (02h): Supported 00:30:37.295 Compare (05h): Supported 00:30:37.295 Write Zeroes (08h): Supported LBA-Change 00:30:37.295 Dataset Management (09h): Supported LBA-Change 00:30:37.295 Unknown (0Ch): Supported 00:30:37.295 Unknown (12h): Supported 00:30:37.295 Copy (19h): Supported LBA-Change 00:30:37.295 Unknown (1Dh): Supported LBA-Change 00:30:37.295 00:30:37.295 Error Log 00:30:37.295 ========= 00:30:37.295 00:30:37.295 Arbitration 00:30:37.295 =========== 00:30:37.295 Arbitration Burst: no limit 00:30:37.295 00:30:37.295 Power Management 00:30:37.295 ================ 00:30:37.295 Number of Power States: 1 00:30:37.295 Current Power State: Power State #0 00:30:37.295 Power State #0: 00:30:37.295 Max Power: 25.00 W 00:30:37.295 Non-Operational State: Operational 00:30:37.295 Entry Latency: 16 microseconds 00:30:37.295 Exit Latency: 4 microseconds 00:30:37.295 Relative Read Throughput: 0 00:30:37.295 Relative Read Latency: 0 00:30:37.295 Relative Write Throughput: 0 00:30:37.295 Relative Write Latency: 0 00:30:37.295 Idle Power: Not Reported 00:30:37.295 Active Power: Not Reported 00:30:37.295 Non-Operational Permissive Mode: Not Supported 00:30:37.295 00:30:37.295 Health Information 00:30:37.295 ================== 00:30:37.295 Critical Warnings: 00:30:37.295 Available Spare Space: OK 00:30:37.295 Temperature: OK 00:30:37.295 Device Reliability: OK 00:30:37.295 Read Only: No 00:30:37.295 Volatile Memory Backup: OK 00:30:37.295 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.295 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.295 Available Spare: 0% 00:30:37.295 Available Spare Threshold: 0% 00:30:37.295 Life Percentage Used: 0% 00:30:37.295 Data Units Read: 669 00:30:37.295 Data Units Written: 597 00:30:37.295 Host Read Commands: 36623 00:30:37.295 Host Write Commands: 36409 00:30:37.295 Controller Busy Time: 0 minutes 00:30:37.295 Power Cycles: 0 00:30:37.295 Power On Hours: 0 hours 00:30:37.295 Unsafe Shutdowns: 0 00:30:37.295 Unrecoverable Media Errors: 0 00:30:37.295 Lifetime Error Log Entries: 0 00:30:37.295 Warning Temperature Time: 0 minutes 00:30:37.295 Critical Temperature Time: 0 minutes 00:30:37.295 00:30:37.295 Number of Queues 00:30:37.295 ================ 00:30:37.295 Number of I/O Submission Queues: 64 00:30:37.295 Number of I/O Completion Queues: 64 00:30:37.295 00:30:37.295 ZNS Specific Controller Data 00:30:37.295 ============================ 00:30:37.295 Zone Append Size Limit: 0 00:30:37.295 00:30:37.295 00:30:37.295 Active Namespaces 00:30:37.295 ================= 00:30:37.295 Namespace ID:1 00:30:37.295 Error Recovery Timeout: Unlimited 00:30:37.295 Command Set Identifier: NVM (00h) 00:30:37.295 Deallocate: Supported 00:30:37.295 Deallocated/Unwritten Error: Supported 00:30:37.295 Deallocated Read Value: All 0x00 00:30:37.295 Deallocate in Write Zeroes: Not Supported 00:30:37.295 Deallocated Guard Field: 0xFFFF 00:30:37.295 Flush: 
Supported 00:30:37.295 Reservation: Not Supported 00:30:37.295 Metadata Transferred as: Separate Metadata Buffer 00:30:37.295 Namespace Sharing Capabilities: Private 00:30:37.295 Size (in LBAs): 1548666 (5GiB) 00:30:37.295 Capacity (in LBAs): 1548666 (5GiB) 00:30:37.295 Utilization (in LBAs): 1548666 (5GiB) 00:30:37.295 Thin Provisioning: Not Supported 00:30:37.295 Per-NS Atomic Units: No 00:30:37.295 Maximum Single Source Range Length: 128 00:30:37.295 Maximum Copy Length: 128 00:30:37.295 Maximum Source Range Count: 128 00:30:37.295 NGUID/EUI64 Never Reused: No 00:30:37.295 Namespace Write Protected: No 00:30:37.295 Number of LBA Formats: 8 00:30:37.295 Current LBA Format: LBA Format #07 00:30:37.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.295 00:30:37.295 NVM Specific Namespace Data 00:30:37.295 =========================== 00:30:37.295 Logical Block Storage Tag Mask: 0 00:30:37.295 Protection Information Capabilities: 00:30:37.295 16b Guard Protection Information Storage Tag Support: No 00:30:37.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.295 Storage Tag Check Read Support: No 00:30:37.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.295 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:37.295 23:12:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:30:37.552 ===================================================== 00:30:37.552 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:37.552 ===================================================== 00:30:37.552 Controller Capabilities/Features 00:30:37.552 ================================ 00:30:37.552 Vendor ID: 1b36 00:30:37.552 Subsystem Vendor ID: 1af4 00:30:37.552 Serial Number: 12341 00:30:37.552 Model Number: QEMU NVMe Ctrl 00:30:37.552 Firmware Version: 8.0.0 00:30:37.552 Recommended Arb Burst: 6 00:30:37.552 IEEE OUI Identifier: 00 54 52 00:30:37.552 Multi-path I/O 00:30:37.552 May have multiple subsystem ports: No 00:30:37.552 May have multiple controllers: No 00:30:37.552 Associated with SR-IOV VF: No 00:30:37.552 Max Data Transfer Size: 524288 00:30:37.552 Max Number of Namespaces: 256 00:30:37.552 Max Number of I/O Queues: 64 00:30:37.552 NVMe 
Specification Version (VS): 1.4 00:30:37.552 NVMe Specification Version (Identify): 1.4 00:30:37.552 Maximum Queue Entries: 2048 00:30:37.552 Contiguous Queues Required: Yes 00:30:37.552 Arbitration Mechanisms Supported 00:30:37.552 Weighted Round Robin: Not Supported 00:30:37.552 Vendor Specific: Not Supported 00:30:37.552 Reset Timeout: 7500 ms 00:30:37.552 Doorbell Stride: 4 bytes 00:30:37.552 NVM Subsystem Reset: Not Supported 00:30:37.552 Command Sets Supported 00:30:37.552 NVM Command Set: Supported 00:30:37.552 Boot Partition: Not Supported 00:30:37.552 Memory Page Size Minimum: 4096 bytes 00:30:37.552 Memory Page Size Maximum: 65536 bytes 00:30:37.552 Persistent Memory Region: Not Supported 00:30:37.552 Optional Asynchronous Events Supported 00:30:37.552 Namespace Attribute Notices: Supported 00:30:37.552 Firmware Activation Notices: Not Supported 00:30:37.552 ANA Change Notices: Not Supported 00:30:37.552 PLE Aggregate Log Change Notices: Not Supported 00:30:37.552 LBA Status Info Alert Notices: Not Supported 00:30:37.552 EGE Aggregate Log Change Notices: Not Supported 00:30:37.552 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.552 Zone Descriptor Change Notices: Not Supported 00:30:37.552 Discovery Log Change Notices: Not Supported 00:30:37.552 Controller Attributes 00:30:37.552 128-bit Host Identifier: Not Supported 00:30:37.552 Non-Operational Permissive Mode: Not Supported 00:30:37.552 NVM Sets: Not Supported 00:30:37.552 Read Recovery Levels: Not Supported 00:30:37.552 Endurance Groups: Not Supported 00:30:37.552 Predictable Latency Mode: Not Supported 00:30:37.552 Traffic Based Keep ALive: Not Supported 00:30:37.552 Namespace Granularity: Not Supported 00:30:37.552 SQ Associations: Not Supported 00:30:37.552 UUID List: Not Supported 00:30:37.552 Multi-Domain Subsystem: Not Supported 00:30:37.552 Fixed Capacity Management: Not Supported 00:30:37.552 Variable Capacity Management: Not Supported 00:30:37.552 Delete Endurance Group: Not Supported 00:30:37.552 Delete NVM Set: Not Supported 00:30:37.552 Extended LBA Formats Supported: Supported 00:30:37.552 Flexible Data Placement Supported: Not Supported 00:30:37.552 00:30:37.552 Controller Memory Buffer Support 00:30:37.552 ================================ 00:30:37.552 Supported: No 00:30:37.552 00:30:37.552 Persistent Memory Region Support 00:30:37.552 ================================ 00:30:37.552 Supported: No 00:30:37.552 00:30:37.552 Admin Command Set Attributes 00:30:37.552 ============================ 00:30:37.552 Security Send/Receive: Not Supported 00:30:37.552 Format NVM: Supported 00:30:37.552 Firmware Activate/Download: Not Supported 00:30:37.552 Namespace Management: Supported 00:30:37.552 Device Self-Test: Not Supported 00:30:37.552 Directives: Supported 00:30:37.552 NVMe-MI: Not Supported 00:30:37.552 Virtualization Management: Not Supported 00:30:37.552 Doorbell Buffer Config: Supported 00:30:37.552 Get LBA Status Capability: Not Supported 00:30:37.552 Command & Feature Lockdown Capability: Not Supported 00:30:37.552 Abort Command Limit: 4 00:30:37.552 Async Event Request Limit: 4 00:30:37.552 Number of Firmware Slots: N/A 00:30:37.552 Firmware Slot 1 Read-Only: N/A 00:30:37.552 Firmware Activation Without Reset: N/A 00:30:37.552 Multiple Update Detection Support: N/A 00:30:37.552 Firmware Update Granularity: No Information Provided 00:30:37.552 Per-Namespace SMART Log: Yes 00:30:37.552 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.552 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:30:37.552 Command Effects Log Page: Supported 00:30:37.552 Get Log Page Extended Data: Supported 00:30:37.552 Telemetry Log Pages: Not Supported 00:30:37.552 Persistent Event Log Pages: Not Supported 00:30:37.552 Supported Log Pages Log Page: May Support 00:30:37.552 Commands Supported & Effects Log Page: Not Supported 00:30:37.552 Feature Identifiers & Effects Log Page:May Support 00:30:37.552 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.552 Data Area 4 for Telemetry Log: Not Supported 00:30:37.552 Error Log Page Entries Supported: 1 00:30:37.552 Keep Alive: Not Supported 00:30:37.552 00:30:37.552 NVM Command Set Attributes 00:30:37.552 ========================== 00:30:37.552 Submission Queue Entry Size 00:30:37.552 Max: 64 00:30:37.552 Min: 64 00:30:37.552 Completion Queue Entry Size 00:30:37.552 Max: 16 00:30:37.552 Min: 16 00:30:37.552 Number of Namespaces: 256 00:30:37.552 Compare Command: Supported 00:30:37.552 Write Uncorrectable Command: Not Supported 00:30:37.552 Dataset Management Command: Supported 00:30:37.552 Write Zeroes Command: Supported 00:30:37.552 Set Features Save Field: Supported 00:30:37.552 Reservations: Not Supported 00:30:37.552 Timestamp: Supported 00:30:37.552 Copy: Supported 00:30:37.552 Volatile Write Cache: Present 00:30:37.552 Atomic Write Unit (Normal): 1 00:30:37.552 Atomic Write Unit (PFail): 1 00:30:37.552 Atomic Compare & Write Unit: 1 00:30:37.552 Fused Compare & Write: Not Supported 00:30:37.552 Scatter-Gather List 00:30:37.552 SGL Command Set: Supported 00:30:37.552 SGL Keyed: Not Supported 00:30:37.552 SGL Bit Bucket Descriptor: Not Supported 00:30:37.552 SGL Metadata Pointer: Not Supported 00:30:37.552 Oversized SGL: Not Supported 00:30:37.553 SGL Metadata Address: Not Supported 00:30:37.553 SGL Offset: Not Supported 00:30:37.553 Transport SGL Data Block: Not Supported 00:30:37.553 Replay Protected Memory Block: Not Supported 00:30:37.553 00:30:37.553 Firmware Slot Information 00:30:37.553 ========================= 00:30:37.553 Active slot: 1 00:30:37.553 Slot 1 Firmware Revision: 1.0 00:30:37.553 00:30:37.553 00:30:37.553 Commands Supported and Effects 00:30:37.553 ============================== 00:30:37.553 Admin Commands 00:30:37.553 -------------- 00:30:37.553 Delete I/O Submission Queue (00h): Supported 00:30:37.553 Create I/O Submission Queue (01h): Supported 00:30:37.553 Get Log Page (02h): Supported 00:30:37.553 Delete I/O Completion Queue (04h): Supported 00:30:37.553 Create I/O Completion Queue (05h): Supported 00:30:37.553 Identify (06h): Supported 00:30:37.553 Abort (08h): Supported 00:30:37.553 Set Features (09h): Supported 00:30:37.553 Get Features (0Ah): Supported 00:30:37.553 Asynchronous Event Request (0Ch): Supported 00:30:37.553 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.553 Directive Send (19h): Supported 00:30:37.553 Directive Receive (1Ah): Supported 00:30:37.553 Virtualization Management (1Ch): Supported 00:30:37.553 Doorbell Buffer Config (7Ch): Supported 00:30:37.553 Format NVM (80h): Supported LBA-Change 00:30:37.553 I/O Commands 00:30:37.553 ------------ 00:30:37.553 Flush (00h): Supported LBA-Change 00:30:37.553 Write (01h): Supported LBA-Change 00:30:37.553 Read (02h): Supported 00:30:37.553 Compare (05h): Supported 00:30:37.553 Write Zeroes (08h): Supported LBA-Change 00:30:37.553 Dataset Management (09h): Supported LBA-Change 00:30:37.553 Unknown (0Ch): Supported 00:30:37.553 Unknown (12h): Supported 00:30:37.553 Copy (19h): Supported LBA-Change 00:30:37.553 Unknown (1Dh): 
Supported LBA-Change 00:30:37.553 00:30:37.553 Error Log 00:30:37.553 ========= 00:30:37.553 00:30:37.553 Arbitration 00:30:37.553 =========== 00:30:37.553 Arbitration Burst: no limit 00:30:37.553 00:30:37.553 Power Management 00:30:37.553 ================ 00:30:37.553 Number of Power States: 1 00:30:37.553 Current Power State: Power State #0 00:30:37.553 Power State #0: 00:30:37.553 Max Power: 25.00 W 00:30:37.553 Non-Operational State: Operational 00:30:37.553 Entry Latency: 16 microseconds 00:30:37.553 Exit Latency: 4 microseconds 00:30:37.553 Relative Read Throughput: 0 00:30:37.553 Relative Read Latency: 0 00:30:37.553 Relative Write Throughput: 0 00:30:37.553 Relative Write Latency: 0 00:30:37.553 Idle Power: Not Reported 00:30:37.553 Active Power: Not Reported 00:30:37.553 Non-Operational Permissive Mode: Not Supported 00:30:37.553 00:30:37.553 Health Information 00:30:37.553 ================== 00:30:37.553 Critical Warnings: 00:30:37.553 Available Spare Space: OK 00:30:37.553 Temperature: OK 00:30:37.553 Device Reliability: OK 00:30:37.553 Read Only: No 00:30:37.553 Volatile Memory Backup: OK 00:30:37.553 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.553 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.553 Available Spare: 0% 00:30:37.553 Available Spare Threshold: 0% 00:30:37.553 Life Percentage Used: 0% 00:30:37.553 Data Units Read: 1040 00:30:37.553 Data Units Written: 907 00:30:37.553 Host Read Commands: 55114 00:30:37.553 Host Write Commands: 53895 00:30:37.553 Controller Busy Time: 0 minutes 00:30:37.553 Power Cycles: 0 00:30:37.553 Power On Hours: 0 hours 00:30:37.553 Unsafe Shutdowns: 0 00:30:37.553 Unrecoverable Media Errors: 0 00:30:37.553 Lifetime Error Log Entries: 0 00:30:37.553 Warning Temperature Time: 0 minutes 00:30:37.553 Critical Temperature Time: 0 minutes 00:30:37.553 00:30:37.553 Number of Queues 00:30:37.553 ================ 00:30:37.553 Number of I/O Submission Queues: 64 00:30:37.553 Number of I/O Completion Queues: 64 00:30:37.553 00:30:37.553 ZNS Specific Controller Data 00:30:37.553 ============================ 00:30:37.553 Zone Append Size Limit: 0 00:30:37.553 00:30:37.553 00:30:37.553 Active Namespaces 00:30:37.553 ================= 00:30:37.553 Namespace ID:1 00:30:37.553 Error Recovery Timeout: Unlimited 00:30:37.553 Command Set Identifier: NVM (00h) 00:30:37.553 Deallocate: Supported 00:30:37.553 Deallocated/Unwritten Error: Supported 00:30:37.553 Deallocated Read Value: All 0x00 00:30:37.553 Deallocate in Write Zeroes: Not Supported 00:30:37.553 Deallocated Guard Field: 0xFFFF 00:30:37.553 Flush: Supported 00:30:37.553 Reservation: Not Supported 00:30:37.553 Namespace Sharing Capabilities: Private 00:30:37.553 Size (in LBAs): 1310720 (5GiB) 00:30:37.553 Capacity (in LBAs): 1310720 (5GiB) 00:30:37.553 Utilization (in LBAs): 1310720 (5GiB) 00:30:37.553 Thin Provisioning: Not Supported 00:30:37.553 Per-NS Atomic Units: No 00:30:37.553 Maximum Single Source Range Length: 128 00:30:37.553 Maximum Copy Length: 128 00:30:37.553 Maximum Source Range Count: 128 00:30:37.553 NGUID/EUI64 Never Reused: No 00:30:37.553 Namespace Write Protected: No 00:30:37.553 Number of LBA Formats: 8 00:30:37.553 Current LBA Format: LBA Format #04 00:30:37.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.553 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.553 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.553 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.553 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:30:37.553 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.553 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.553 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.553 00:30:37.553 NVM Specific Namespace Data 00:30:37.553 =========================== 00:30:37.553 Logical Block Storage Tag Mask: 0 00:30:37.553 Protection Information Capabilities: 00:30:37.553 16b Guard Protection Information Storage Tag Support: No 00:30:37.553 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.553 Storage Tag Check Read Support: No 00:30:37.553 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.553 23:12:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:37.553 23:12:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:30:37.811 ===================================================== 00:30:37.811 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:37.811 ===================================================== 00:30:37.811 Controller Capabilities/Features 00:30:37.811 ================================ 00:30:37.811 Vendor ID: 1b36 00:30:37.811 Subsystem Vendor ID: 1af4 00:30:37.811 Serial Number: 12342 00:30:37.811 Model Number: QEMU NVMe Ctrl 00:30:37.811 Firmware Version: 8.0.0 00:30:37.811 Recommended Arb Burst: 6 00:30:37.811 IEEE OUI Identifier: 00 54 52 00:30:37.811 Multi-path I/O 00:30:37.811 May have multiple subsystem ports: No 00:30:37.811 May have multiple controllers: No 00:30:37.811 Associated with SR-IOV VF: No 00:30:37.811 Max Data Transfer Size: 524288 00:30:37.811 Max Number of Namespaces: 256 00:30:37.811 Max Number of I/O Queues: 64 00:30:37.811 NVMe Specification Version (VS): 1.4 00:30:37.811 NVMe Specification Version (Identify): 1.4 00:30:37.811 Maximum Queue Entries: 2048 00:30:37.811 Contiguous Queues Required: Yes 00:30:37.811 Arbitration Mechanisms Supported 00:30:37.811 Weighted Round Robin: Not Supported 00:30:37.811 Vendor Specific: Not Supported 00:30:37.811 Reset Timeout: 7500 ms 00:30:37.811 Doorbell Stride: 4 bytes 00:30:37.811 NVM Subsystem Reset: Not Supported 00:30:37.811 Command Sets Supported 00:30:37.811 NVM Command Set: Supported 00:30:37.811 Boot Partition: Not Supported 00:30:37.811 Memory Page Size Minimum: 4096 bytes 00:30:37.811 Memory Page Size Maximum: 65536 bytes 00:30:37.811 Persistent Memory Region: Not Supported 00:30:37.811 Optional Asynchronous Events Supported 00:30:37.811 Namespace Attribute Notices: Supported 00:30:37.811 Firmware Activation Notices: Not Supported 00:30:37.811 ANA Change Notices: Not Supported 00:30:37.811 PLE Aggregate Log Change Notices: Not Supported 00:30:37.811 LBA Status Info Alert Notices: 
Not Supported 00:30:37.811 EGE Aggregate Log Change Notices: Not Supported 00:30:37.811 Normal NVM Subsystem Shutdown event: Not Supported 00:30:37.811 Zone Descriptor Change Notices: Not Supported 00:30:37.811 Discovery Log Change Notices: Not Supported 00:30:37.811 Controller Attributes 00:30:37.811 128-bit Host Identifier: Not Supported 00:30:37.811 Non-Operational Permissive Mode: Not Supported 00:30:37.811 NVM Sets: Not Supported 00:30:37.811 Read Recovery Levels: Not Supported 00:30:37.811 Endurance Groups: Not Supported 00:30:37.811 Predictable Latency Mode: Not Supported 00:30:37.811 Traffic Based Keep ALive: Not Supported 00:30:37.811 Namespace Granularity: Not Supported 00:30:37.811 SQ Associations: Not Supported 00:30:37.811 UUID List: Not Supported 00:30:37.811 Multi-Domain Subsystem: Not Supported 00:30:37.811 Fixed Capacity Management: Not Supported 00:30:37.811 Variable Capacity Management: Not Supported 00:30:37.811 Delete Endurance Group: Not Supported 00:30:37.811 Delete NVM Set: Not Supported 00:30:37.811 Extended LBA Formats Supported: Supported 00:30:37.811 Flexible Data Placement Supported: Not Supported 00:30:37.811 00:30:37.811 Controller Memory Buffer Support 00:30:37.811 ================================ 00:30:37.811 Supported: No 00:30:37.811 00:30:37.811 Persistent Memory Region Support 00:30:37.811 ================================ 00:30:37.811 Supported: No 00:30:37.811 00:30:37.811 Admin Command Set Attributes 00:30:37.811 ============================ 00:30:37.811 Security Send/Receive: Not Supported 00:30:37.811 Format NVM: Supported 00:30:37.811 Firmware Activate/Download: Not Supported 00:30:37.811 Namespace Management: Supported 00:30:37.811 Device Self-Test: Not Supported 00:30:37.811 Directives: Supported 00:30:37.811 NVMe-MI: Not Supported 00:30:37.811 Virtualization Management: Not Supported 00:30:37.811 Doorbell Buffer Config: Supported 00:30:37.811 Get LBA Status Capability: Not Supported 00:30:37.811 Command & Feature Lockdown Capability: Not Supported 00:30:37.811 Abort Command Limit: 4 00:30:37.811 Async Event Request Limit: 4 00:30:37.811 Number of Firmware Slots: N/A 00:30:37.811 Firmware Slot 1 Read-Only: N/A 00:30:37.811 Firmware Activation Without Reset: N/A 00:30:37.811 Multiple Update Detection Support: N/A 00:30:37.811 Firmware Update Granularity: No Information Provided 00:30:37.811 Per-Namespace SMART Log: Yes 00:30:37.811 Asymmetric Namespace Access Log Page: Not Supported 00:30:37.811 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:37.811 Command Effects Log Page: Supported 00:30:37.811 Get Log Page Extended Data: Supported 00:30:37.811 Telemetry Log Pages: Not Supported 00:30:37.811 Persistent Event Log Pages: Not Supported 00:30:37.811 Supported Log Pages Log Page: May Support 00:30:37.811 Commands Supported & Effects Log Page: Not Supported 00:30:37.811 Feature Identifiers & Effects Log Page:May Support 00:30:37.811 NVMe-MI Commands & Effects Log Page: May Support 00:30:37.811 Data Area 4 for Telemetry Log: Not Supported 00:30:37.811 Error Log Page Entries Supported: 1 00:30:37.811 Keep Alive: Not Supported 00:30:37.811 00:30:37.811 NVM Command Set Attributes 00:30:37.811 ========================== 00:30:37.811 Submission Queue Entry Size 00:30:37.811 Max: 64 00:30:37.811 Min: 64 00:30:37.811 Completion Queue Entry Size 00:30:37.811 Max: 16 00:30:37.811 Min: 16 00:30:37.811 Number of Namespaces: 256 00:30:37.811 Compare Command: Supported 00:30:37.811 Write Uncorrectable Command: Not Supported 00:30:37.811 Dataset Management Command: 
Supported 00:30:37.811 Write Zeroes Command: Supported 00:30:37.811 Set Features Save Field: Supported 00:30:37.811 Reservations: Not Supported 00:30:37.811 Timestamp: Supported 00:30:37.811 Copy: Supported 00:30:37.811 Volatile Write Cache: Present 00:30:37.811 Atomic Write Unit (Normal): 1 00:30:37.811 Atomic Write Unit (PFail): 1 00:30:37.811 Atomic Compare & Write Unit: 1 00:30:37.811 Fused Compare & Write: Not Supported 00:30:37.811 Scatter-Gather List 00:30:37.811 SGL Command Set: Supported 00:30:37.811 SGL Keyed: Not Supported 00:30:37.811 SGL Bit Bucket Descriptor: Not Supported 00:30:37.811 SGL Metadata Pointer: Not Supported 00:30:37.811 Oversized SGL: Not Supported 00:30:37.811 SGL Metadata Address: Not Supported 00:30:37.811 SGL Offset: Not Supported 00:30:37.811 Transport SGL Data Block: Not Supported 00:30:37.811 Replay Protected Memory Block: Not Supported 00:30:37.811 00:30:37.811 Firmware Slot Information 00:30:37.811 ========================= 00:30:37.811 Active slot: 1 00:30:37.811 Slot 1 Firmware Revision: 1.0 00:30:37.811 00:30:37.811 00:30:37.811 Commands Supported and Effects 00:30:37.811 ============================== 00:30:37.811 Admin Commands 00:30:37.811 -------------- 00:30:37.811 Delete I/O Submission Queue (00h): Supported 00:30:37.811 Create I/O Submission Queue (01h): Supported 00:30:37.811 Get Log Page (02h): Supported 00:30:37.811 Delete I/O Completion Queue (04h): Supported 00:30:37.811 Create I/O Completion Queue (05h): Supported 00:30:37.811 Identify (06h): Supported 00:30:37.811 Abort (08h): Supported 00:30:37.811 Set Features (09h): Supported 00:30:37.811 Get Features (0Ah): Supported 00:30:37.811 Asynchronous Event Request (0Ch): Supported 00:30:37.811 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:37.811 Directive Send (19h): Supported 00:30:37.811 Directive Receive (1Ah): Supported 00:30:37.811 Virtualization Management (1Ch): Supported 00:30:37.811 Doorbell Buffer Config (7Ch): Supported 00:30:37.811 Format NVM (80h): Supported LBA-Change 00:30:37.811 I/O Commands 00:30:37.811 ------------ 00:30:37.811 Flush (00h): Supported LBA-Change 00:30:37.811 Write (01h): Supported LBA-Change 00:30:37.811 Read (02h): Supported 00:30:37.811 Compare (05h): Supported 00:30:37.811 Write Zeroes (08h): Supported LBA-Change 00:30:37.811 Dataset Management (09h): Supported LBA-Change 00:30:37.811 Unknown (0Ch): Supported 00:30:37.811 Unknown (12h): Supported 00:30:37.811 Copy (19h): Supported LBA-Change 00:30:37.811 Unknown (1Dh): Supported LBA-Change 00:30:37.811 00:30:37.811 Error Log 00:30:37.811 ========= 00:30:37.811 00:30:37.811 Arbitration 00:30:37.811 =========== 00:30:37.811 Arbitration Burst: no limit 00:30:37.811 00:30:37.811 Power Management 00:30:37.811 ================ 00:30:37.811 Number of Power States: 1 00:30:37.811 Current Power State: Power State #0 00:30:37.811 Power State #0: 00:30:37.811 Max Power: 25.00 W 00:30:37.811 Non-Operational State: Operational 00:30:37.811 Entry Latency: 16 microseconds 00:30:37.811 Exit Latency: 4 microseconds 00:30:37.811 Relative Read Throughput: 0 00:30:37.811 Relative Read Latency: 0 00:30:37.811 Relative Write Throughput: 0 00:30:37.811 Relative Write Latency: 0 00:30:37.811 Idle Power: Not Reported 00:30:37.811 Active Power: Not Reported 00:30:37.811 Non-Operational Permissive Mode: Not Supported 00:30:37.811 00:30:37.811 Health Information 00:30:37.811 ================== 00:30:37.811 Critical Warnings: 00:30:37.811 Available Spare Space: OK 00:30:37.811 Temperature: OK 00:30:37.811 Device 
Reliability: OK 00:30:37.811 Read Only: No 00:30:37.811 Volatile Memory Backup: OK 00:30:37.811 Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.811 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:37.811 Available Spare: 0% 00:30:37.811 Available Spare Threshold: 0% 00:30:37.811 Life Percentage Used: 0% 00:30:37.811 Data Units Read: 2189 00:30:37.811 Data Units Written: 1976 00:30:37.811 Host Read Commands: 112558 00:30:37.811 Host Write Commands: 110828 00:30:37.811 Controller Busy Time: 0 minutes 00:30:37.811 Power Cycles: 0 00:30:37.811 Power On Hours: 0 hours 00:30:37.811 Unsafe Shutdowns: 0 00:30:37.811 Unrecoverable Media Errors: 0 00:30:37.811 Lifetime Error Log Entries: 0 00:30:37.811 Warning Temperature Time: 0 minutes 00:30:37.811 Critical Temperature Time: 0 minutes 00:30:37.811 00:30:37.811 Number of Queues 00:30:37.811 ================ 00:30:37.811 Number of I/O Submission Queues: 64 00:30:37.811 Number of I/O Completion Queues: 64 00:30:37.811 00:30:37.811 ZNS Specific Controller Data 00:30:37.811 ============================ 00:30:37.811 Zone Append Size Limit: 0 00:30:37.811 00:30:37.811 00:30:37.811 Active Namespaces 00:30:37.811 ================= 00:30:37.811 Namespace ID:1 00:30:37.811 Error Recovery Timeout: Unlimited 00:30:37.811 Command Set Identifier: NVM (00h) 00:30:37.811 Deallocate: Supported 00:30:37.811 Deallocated/Unwritten Error: Supported 00:30:37.811 Deallocated Read Value: All 0x00 00:30:37.811 Deallocate in Write Zeroes: Not Supported 00:30:37.811 Deallocated Guard Field: 0xFFFF 00:30:37.812 Flush: Supported 00:30:37.812 Reservation: Not Supported 00:30:37.812 Namespace Sharing Capabilities: Private 00:30:37.812 Size (in LBAs): 1048576 (4GiB) 00:30:37.812 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.812 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.812 Thin Provisioning: Not Supported 00:30:37.812 Per-NS Atomic Units: No 00:30:37.812 Maximum Single Source Range Length: 128 00:30:37.812 Maximum Copy Length: 128 00:30:37.812 Maximum Source Range Count: 128 00:30:37.812 NGUID/EUI64 Never Reused: No 00:30:37.812 Namespace Write Protected: No 00:30:37.812 Number of LBA Formats: 8 00:30:37.812 Current LBA Format: LBA Format #04 00:30:37.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.812 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.812 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.812 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.812 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.812 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.812 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.812 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.812 00:30:37.812 NVM Specific Namespace Data 00:30:37.812 =========================== 00:30:37.812 Logical Block Storage Tag Mask: 0 00:30:37.812 Protection Information Capabilities: 00:30:37.812 16b Guard Protection Information Storage Tag Support: No 00:30:37.812 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.812 Storage Tag Check Read Support: No 00:30:37.812 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Namespace ID:2 00:30:37.812 Error Recovery Timeout: Unlimited 00:30:37.812 Command Set Identifier: NVM (00h) 00:30:37.812 Deallocate: Supported 00:30:37.812 Deallocated/Unwritten Error: Supported 00:30:37.812 Deallocated Read Value: All 0x00 00:30:37.812 Deallocate in Write Zeroes: Not Supported 00:30:37.812 Deallocated Guard Field: 0xFFFF 00:30:37.812 Flush: Supported 00:30:37.812 Reservation: Not Supported 00:30:37.812 Namespace Sharing Capabilities: Private 00:30:37.812 Size (in LBAs): 1048576 (4GiB) 00:30:37.812 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.812 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.812 Thin Provisioning: Not Supported 00:30:37.812 Per-NS Atomic Units: No 00:30:37.812 Maximum Single Source Range Length: 128 00:30:37.812 Maximum Copy Length: 128 00:30:37.812 Maximum Source Range Count: 128 00:30:37.812 NGUID/EUI64 Never Reused: No 00:30:37.812 Namespace Write Protected: No 00:30:37.812 Number of LBA Formats: 8 00:30:37.812 Current LBA Format: LBA Format #04 00:30:37.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.812 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.812 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.812 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.812 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.812 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.812 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.812 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.812 00:30:37.812 NVM Specific Namespace Data 00:30:37.812 =========================== 00:30:37.812 Logical Block Storage Tag Mask: 0 00:30:37.812 Protection Information Capabilities: 00:30:37.812 16b Guard Protection Information Storage Tag Support: No 00:30:37.812 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.812 Storage Tag Check Read Support: No 00:30:37.812 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Namespace ID:3 00:30:37.812 Error Recovery Timeout: Unlimited 00:30:37.812 Command Set Identifier: NVM (00h) 00:30:37.812 Deallocate: Supported 00:30:37.812 Deallocated/Unwritten Error: Supported 00:30:37.812 Deallocated Read Value: All 0x00 00:30:37.812 Deallocate in Write Zeroes: Not Supported 00:30:37.812 Deallocated Guard Field: 0xFFFF 00:30:37.812 Flush: Supported 00:30:37.812 Reservation: Not Supported 00:30:37.812 
Namespace Sharing Capabilities: Private 00:30:37.812 Size (in LBAs): 1048576 (4GiB) 00:30:37.812 Capacity (in LBAs): 1048576 (4GiB) 00:30:37.812 Utilization (in LBAs): 1048576 (4GiB) 00:30:37.812 Thin Provisioning: Not Supported 00:30:37.812 Per-NS Atomic Units: No 00:30:37.812 Maximum Single Source Range Length: 128 00:30:37.812 Maximum Copy Length: 128 00:30:37.812 Maximum Source Range Count: 128 00:30:37.812 NGUID/EUI64 Never Reused: No 00:30:37.812 Namespace Write Protected: No 00:30:37.812 Number of LBA Formats: 8 00:30:37.812 Current LBA Format: LBA Format #04 00:30:37.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:37.812 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:37.812 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:37.812 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:37.812 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:37.812 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:37.812 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:37.812 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:37.812 00:30:37.812 NVM Specific Namespace Data 00:30:37.812 =========================== 00:30:37.812 Logical Block Storage Tag Mask: 0 00:30:37.812 Protection Information Capabilities: 00:30:37.812 16b Guard Protection Information Storage Tag Support: No 00:30:37.812 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:37.812 Storage Tag Check Read Support: No 00:30:37.812 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:37.812 23:12:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:37.812 23:12:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:30:38.070 ===================================================== 00:30:38.070 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:38.070 ===================================================== 00:30:38.070 Controller Capabilities/Features 00:30:38.070 ================================ 00:30:38.070 Vendor ID: 1b36 00:30:38.070 Subsystem Vendor ID: 1af4 00:30:38.070 Serial Number: 12343 00:30:38.070 Model Number: QEMU NVMe Ctrl 00:30:38.070 Firmware Version: 8.0.0 00:30:38.070 Recommended Arb Burst: 6 00:30:38.070 IEEE OUI Identifier: 00 54 52 00:30:38.070 Multi-path I/O 00:30:38.070 May have multiple subsystem ports: No 00:30:38.070 May have multiple controllers: Yes 00:30:38.070 Associated with SR-IOV VF: No 00:30:38.070 Max Data Transfer Size: 524288 00:30:38.070 Max Number of Namespaces: 256 00:30:38.070 Max Number of I/O Queues: 64 00:30:38.070 NVMe Specification Version (VS): 1.4 00:30:38.070 NVMe Specification Version (Identify): 1.4 00:30:38.070 Maximum Queue Entries: 2048 
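[annotation] Every Health Information block in these dumps prints the temperature twice, e.g. "Current Temperature: 323 Kelvin (50 Celsius)" and "Temperature Threshold: 343 Kelvin (70 Celsius)". NVMe controllers report temperature in Kelvin; the Celsius figure in the printout follows from a plain integer offset of 273, as a quick shell check reproduces (the 273 offset is an assumption read off the printed pairs, not taken from the tool's source):

  # Kelvin values as reported in the Health Information blocks above
  for k in 323 343; do
      echo "$k Kelvin = $(( k - 273 )) Celsius"
  done
  # prints: 323 Kelvin = 50 Celsius, then 343 Kelvin = 70 Celsius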
00:30:38.070 Contiguous Queues Required: Yes 00:30:38.070 Arbitration Mechanisms Supported 00:30:38.070 Weighted Round Robin: Not Supported 00:30:38.070 Vendor Specific: Not Supported 00:30:38.070 Reset Timeout: 7500 ms 00:30:38.070 Doorbell Stride: 4 bytes 00:30:38.070 NVM Subsystem Reset: Not Supported 00:30:38.070 Command Sets Supported 00:30:38.070 NVM Command Set: Supported 00:30:38.070 Boot Partition: Not Supported 00:30:38.070 Memory Page Size Minimum: 4096 bytes 00:30:38.070 Memory Page Size Maximum: 65536 bytes 00:30:38.070 Persistent Memory Region: Not Supported 00:30:38.070 Optional Asynchronous Events Supported 00:30:38.070 Namespace Attribute Notices: Supported 00:30:38.070 Firmware Activation Notices: Not Supported 00:30:38.070 ANA Change Notices: Not Supported 00:30:38.070 PLE Aggregate Log Change Notices: Not Supported 00:30:38.070 LBA Status Info Alert Notices: Not Supported 00:30:38.070 EGE Aggregate Log Change Notices: Not Supported 00:30:38.070 Normal NVM Subsystem Shutdown event: Not Supported 00:30:38.070 Zone Descriptor Change Notices: Not Supported 00:30:38.070 Discovery Log Change Notices: Not Supported 00:30:38.070 Controller Attributes 00:30:38.070 128-bit Host Identifier: Not Supported 00:30:38.070 Non-Operational Permissive Mode: Not Supported 00:30:38.070 NVM Sets: Not Supported 00:30:38.070 Read Recovery Levels: Not Supported 00:30:38.070 Endurance Groups: Supported 00:30:38.070 Predictable Latency Mode: Not Supported 00:30:38.070 Traffic Based Keep Alive: Not Supported 00:30:38.070 Namespace Granularity: Not Supported 00:30:38.070 SQ Associations: Not Supported 00:30:38.070 UUID List: Not Supported 00:30:38.070 Multi-Domain Subsystem: Not Supported 00:30:38.070 Fixed Capacity Management: Not Supported 00:30:38.070 Variable Capacity Management: Not Supported 00:30:38.070 Delete Endurance Group: Not Supported 00:30:38.070 Delete NVM Set: Not Supported 00:30:38.070 Extended LBA Formats Supported: Supported 00:30:38.070 Flexible Data Placement Supported: Supported 00:30:38.070 00:30:38.070 Controller Memory Buffer Support 00:30:38.070 ================================ 00:30:38.070 Supported: No 00:30:38.070 00:30:38.070 Persistent Memory Region Support 00:30:38.070 ================================ 00:30:38.070 Supported: No 00:30:38.070 00:30:38.070 Admin Command Set Attributes 00:30:38.070 ============================ 00:30:38.070 Security Send/Receive: Not Supported 00:30:38.070 Format NVM: Supported 00:30:38.070 Firmware Activate/Download: Not Supported 00:30:38.070 Namespace Management: Supported 00:30:38.070 Device Self-Test: Not Supported 00:30:38.070 Directives: Supported 00:30:38.070 NVMe-MI: Not Supported 00:30:38.070 Virtualization Management: Not Supported 00:30:38.070 Doorbell Buffer Config: Supported 00:30:38.070 Get LBA Status Capability: Not Supported 00:30:38.070 Command & Feature Lockdown Capability: Not Supported 00:30:38.070 Abort Command Limit: 4 00:30:38.070 Async Event Request Limit: 4 00:30:38.070 Number of Firmware Slots: N/A 00:30:38.070 Firmware Slot 1 Read-Only: N/A 00:30:38.070 Firmware Activation Without Reset: N/A 00:30:38.070 Multiple Update Detection Support: N/A 00:30:38.070 Firmware Update Granularity: No Information Provided 00:30:38.070 Per-Namespace SMART Log: Yes 00:30:38.070 Asymmetric Namespace Access Log Page: Not Supported 00:30:38.070 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:38.070 Command Effects Log Page: Supported 00:30:38.070 Get Log Page Extended Data: Supported 00:30:38.070 Telemetry Log Pages: Not
Supported 00:30:38.071 Persistent Event Log Pages: Not Supported 00:30:38.071 Supported Log Pages Log Page: May Support 00:30:38.071 Commands Supported & Effects Log Page: Not Supported 00:30:38.071 Feature Identifiers & Effects Log Page: May Support 00:30:38.071 NVMe-MI Commands & Effects Log Page: May Support 00:30:38.071 Data Area 4 for Telemetry Log: Not Supported 00:30:38.071 Error Log Page Entries Supported: 1 00:30:38.071 Keep Alive: Not Supported 00:30:38.071 00:30:38.071 NVM Command Set Attributes 00:30:38.071 ========================== 00:30:38.071 Submission Queue Entry Size 00:30:38.071 Max: 64 00:30:38.071 Min: 64 00:30:38.071 Completion Queue Entry Size 00:30:38.071 Max: 16 00:30:38.071 Min: 16 00:30:38.071 Number of Namespaces: 256 00:30:38.071 Compare Command: Supported 00:30:38.071 Write Uncorrectable Command: Not Supported 00:30:38.071 Dataset Management Command: Supported 00:30:38.071 Write Zeroes Command: Supported 00:30:38.071 Set Features Save Field: Supported 00:30:38.071 Reservations: Not Supported 00:30:38.071 Timestamp: Supported 00:30:38.071 Copy: Supported 00:30:38.071 Volatile Write Cache: Present 00:30:38.071 Atomic Write Unit (Normal): 1 00:30:38.071 Atomic Write Unit (PFail): 1 00:30:38.071 Atomic Compare & Write Unit: 1 00:30:38.071 Fused Compare & Write: Not Supported 00:30:38.071 Scatter-Gather List 00:30:38.071 SGL Command Set: Supported 00:30:38.071 SGL Keyed: Not Supported 00:30:38.071 SGL Bit Bucket Descriptor: Not Supported 00:30:38.071 SGL Metadata Pointer: Not Supported 00:30:38.071 Oversized SGL: Not Supported 00:30:38.071 SGL Metadata Address: Not Supported 00:30:38.071 SGL Offset: Not Supported 00:30:38.071 Transport SGL Data Block: Not Supported 00:30:38.071 Replay Protected Memory Block: Not Supported 00:30:38.071 00:30:38.071 Firmware Slot Information 00:30:38.071 ========================= 00:30:38.071 Active slot: 1 00:30:38.071 Slot 1 Firmware Revision: 1.0 00:30:38.071 00:30:38.071 00:30:38.071 Commands Supported and Effects 00:30:38.071 ============================== 00:30:38.071 Admin Commands 00:30:38.071 -------------- 00:30:38.071 Delete I/O Submission Queue (00h): Supported 00:30:38.071 Create I/O Submission Queue (01h): Supported 00:30:38.071 Get Log Page (02h): Supported 00:30:38.071 Delete I/O Completion Queue (04h): Supported 00:30:38.071 Create I/O Completion Queue (05h): Supported 00:30:38.071 Identify (06h): Supported 00:30:38.071 Abort (08h): Supported 00:30:38.071 Set Features (09h): Supported 00:30:38.071 Get Features (0Ah): Supported 00:30:38.071 Asynchronous Event Request (0Ch): Supported 00:30:38.071 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:38.071 Directive Send (19h): Supported 00:30:38.071 Directive Receive (1Ah): Supported 00:30:38.071 Virtualization Management (1Ch): Supported 00:30:38.071 Doorbell Buffer Config (7Ch): Supported 00:30:38.071 Format NVM (80h): Supported LBA-Change 00:30:38.071 I/O Commands 00:30:38.071 ------------ 00:30:38.071 Flush (00h): Supported LBA-Change 00:30:38.071 Write (01h): Supported LBA-Change 00:30:38.071 Read (02h): Supported 00:30:38.071 Compare (05h): Supported 00:30:38.071 Write Zeroes (08h): Supported LBA-Change 00:30:38.071 Dataset Management (09h): Supported LBA-Change 00:30:38.071 Unknown (0Ch): Supported 00:30:38.071 Unknown (12h): Supported 00:30:38.071 Copy (19h): Supported LBA-Change 00:30:38.071 Unknown (1Dh): Supported LBA-Change 00:30:38.071 00:30:38.071 Error Log 00:30:38.071 ========= 00:30:38.071 00:30:38.071 Arbitration 00:30:38.071 ===========
00:30:38.071 Arbitration Burst: no limit 00:30:38.071 00:30:38.071 Power Management 00:30:38.071 ================ 00:30:38.071 Number of Power States: 1 00:30:38.071 Current Power State: Power State #0 00:30:38.071 Power State #0: 00:30:38.071 Max Power: 25.00 W 00:30:38.071 Non-Operational State: Operational 00:30:38.071 Entry Latency: 16 microseconds 00:30:38.071 Exit Latency: 4 microseconds 00:30:38.071 Relative Read Throughput: 0 00:30:38.071 Relative Read Latency: 0 00:30:38.071 Relative Write Throughput: 0 00:30:38.071 Relative Write Latency: 0 00:30:38.071 Idle Power: Not Reported 00:30:38.071 Active Power: Not Reported 00:30:38.071 Non-Operational Permissive Mode: Not Supported 00:30:38.071 00:30:38.071 Health Information 00:30:38.071 ================== 00:30:38.071 Critical Warnings: 00:30:38.071 Available Spare Space: OK 00:30:38.071 Temperature: OK 00:30:38.071 Device Reliability: OK 00:30:38.071 Read Only: No 00:30:38.071 Volatile Memory Backup: OK 00:30:38.071 Current Temperature: 323 Kelvin (50 Celsius) 00:30:38.071 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:38.071 Available Spare: 0% 00:30:38.071 Available Spare Threshold: 0% 00:30:38.071 Life Percentage Used: 0% 00:30:38.071 Data Units Read: 803 00:30:38.071 Data Units Written: 732 00:30:38.071 Host Read Commands: 38090 00:30:38.071 Host Write Commands: 37515 00:30:38.071 Controller Busy Time: 0 minutes 00:30:38.071 Power Cycles: 0 00:30:38.071 Power On Hours: 0 hours 00:30:38.071 Unsafe Shutdowns: 0 00:30:38.071 Unrecoverable Media Errors: 0 00:30:38.071 Lifetime Error Log Entries: 0 00:30:38.071 Warning Temperature Time: 0 minutes 00:30:38.071 Critical Temperature Time: 0 minutes 00:30:38.071 00:30:38.071 Number of Queues 00:30:38.071 ================ 00:30:38.071 Number of I/O Submission Queues: 64 00:30:38.071 Number of I/O Completion Queues: 64 00:30:38.071 00:30:38.071 ZNS Specific Controller Data 00:30:38.071 ============================ 00:30:38.071 Zone Append Size Limit: 0 00:30:38.071 00:30:38.071 00:30:38.071 Active Namespaces 00:30:38.071 ================= 00:30:38.071 Namespace ID:1 00:30:38.071 Error Recovery Timeout: Unlimited 00:30:38.071 Command Set Identifier: NVM (00h) 00:30:38.071 Deallocate: Supported 00:30:38.071 Deallocated/Unwritten Error: Supported 00:30:38.071 Deallocated Read Value: All 0x00 00:30:38.071 Deallocate in Write Zeroes: Not Supported 00:30:38.071 Deallocated Guard Field: 0xFFFF 00:30:38.071 Flush: Supported 00:30:38.071 Reservation: Not Supported 00:30:38.071 Namespace Sharing Capabilities: Multiple Controllers 00:30:38.071 Size (in LBAs): 262144 (1GiB) 00:30:38.071 Capacity (in LBAs): 262144 (1GiB) 00:30:38.071 Utilization (in LBAs): 262144 (1GiB) 00:30:38.071 Thin Provisioning: Not Supported 00:30:38.071 Per-NS Atomic Units: No 00:30:38.071 Maximum Single Source Range Length: 128 00:30:38.071 Maximum Copy Length: 128 00:30:38.071 Maximum Source Range Count: 128 00:30:38.071 NGUID/EUI64 Never Reused: No 00:30:38.071 Namespace Write Protected: No 00:30:38.071 Endurance group ID: 1 00:30:38.071 Number of LBA Formats: 8 00:30:38.071 Current LBA Format: LBA Format #04 00:30:38.071 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:38.071 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:38.071 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:38.071 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:38.071 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:38.071 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:38.071 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:30:38.071 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:38.071 00:30:38.071 Get Feature FDP: 00:30:38.071 ================ 00:30:38.071 Enabled: Yes 00:30:38.071 FDP configuration index: 0 00:30:38.071 00:30:38.071 FDP configurations log page 00:30:38.071 =========================== 00:30:38.071 Number of FDP configurations: 1 00:30:38.071 Version: 0 00:30:38.071 Size: 112 00:30:38.071 FDP Configuration Descriptor: 0 00:30:38.071 Descriptor Size: 96 00:30:38.071 Reclaim Group Identifier format: 2 00:30:38.071 FDP Volatile Write Cache: Not Present 00:30:38.071 FDP Configuration: Valid 00:30:38.071 Vendor Specific Size: 0 00:30:38.071 Number of Reclaim Groups: 2 00:30:38.071 Number of Reclaim Unit Handles: 8 00:30:38.071 Max Placement Identifiers: 128 00:30:38.071 Number of Namespaces Supported: 256 00:30:38.071 Reclaim unit Nominal Size: 6000000 bytes 00:30:38.071 Estimated Reclaim Unit Time Limit: Not Reported 00:30:38.071 RUH Desc #000: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #001: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #002: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #003: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #004: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #005: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #006: RUH Type: Initially Isolated 00:30:38.071 RUH Desc #007: RUH Type: Initially Isolated 00:30:38.071 00:30:38.071 FDP reclaim unit handle usage log page 00:30:38.071 ====================================== 00:30:38.071 Number of Reclaim Unit Handles: 8 00:30:38.071 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:38.071 RUH Usage Desc #001: RUH Attributes: Unused 00:30:38.071 RUH Usage Desc #002: RUH Attributes: Unused 00:30:38.071 RUH Usage Desc #003: RUH Attributes: Unused 00:30:38.071 RUH Usage Desc #004: RUH Attributes: Unused 00:30:38.071 RUH Usage Desc #005: RUH Attributes: Unused 00:30:38.071 RUH Usage Desc #006: RUH Attributes: Unused 00:30:38.072 RUH Usage Desc #007: RUH Attributes: Unused 00:30:38.072 00:30:38.072 FDP statistics log page 00:30:38.072 ======================= 00:30:38.072 Host bytes with metadata written: 469934080 00:30:38.072 Media bytes with metadata written: 469987328 00:30:38.072 Media bytes erased: 0 00:30:38.072 00:30:38.072 FDP events log page 00:30:38.072 =================== 00:30:38.072 Number of FDP events: 0 00:30:38.072 00:30:38.072 NVM Specific Namespace Data 00:30:38.072 =========================== 00:30:38.072 Logical Block Storage Tag Mask: 0 00:30:38.072 Protection Information Capabilities: 00:30:38.072 16b Guard Protection Information Storage Tag Support: No 00:30:38.072 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:38.072 Storage Tag Check Read Support: No 00:30:38.072 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:38.072 00:30:38.072 real 0m1.195s 00:30:38.072 user 0m0.437s 00:30:38.072 sys 0m0.539s 00:30:38.072 23:12:18 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.072 23:12:18 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.072 ************************************ 00:30:38.072 END TEST nvme_identify 00:30:38.072 ************************************ 00:30:38.072 23:12:18 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:38.072 23:12:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:38.072 23:12:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.072 23:12:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:38.072 ************************************ 00:30:38.072 START TEST nvme_perf 00:30:38.072 ************************************ 00:30:38.072 23:12:18 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:30:38.072 23:12:18 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:39.447 Initializing NVMe Controllers 00:30:39.447 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:39.447 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:39.447 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:39.447 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:39.447 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:39.447 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:39.447 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:39.447 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:39.447 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:39.447 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:39.447 Initialization complete. Launching workers. 
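For reference, the identify and perf stages recorded above can be reproduced by hand outside the Jenkins harness. A minimal sketch, assuming SPDK is built in-tree at the path this log uses and that the QEMU NVMe controllers sit at the same PCIe addresses; the SPDK_BIN variable and the loop are illustrative, not part of the harness, and every flag is taken verbatim from the invocations in this log:

# Dump controller, namespace, and FDP details for each emulated controller,
# one BDF at a time, with the same flags as the nvme_identify stage above.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done
# Same workload as the nvme_perf run whose results follow: queue depth 128,
# sequential reads, 12288-byte I/Os, 1 second, detailed latency tracking (-LL).
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N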
00:30:39.447 ======================================================== 00:30:39.447 Latency(us) 00:30:39.447 Device Information : IOPS MiB/s Average min max 00:30:39.447 PCIE (0000:00:10.0) NSID 1 from core 0: 17273.28 202.42 7426.96 5709.45 26447.13 00:30:39.447 PCIE (0000:00:11.0) NSID 1 from core 0: 17273.28 202.42 7419.04 5575.93 24937.48 00:30:39.447 PCIE (0000:00:13.0) NSID 1 from core 0: 17273.28 202.42 7409.40 5752.76 23568.14 00:30:39.447 PCIE (0000:00:12.0) NSID 1 from core 0: 17273.28 202.42 7399.72 5719.72 21975.06 00:30:39.447 PCIE (0000:00:12.0) NSID 2 from core 0: 17273.28 202.42 7389.97 5717.87 20429.94 00:30:39.447 PCIE (0000:00:12.0) NSID 3 from core 0: 17273.28 202.42 7380.30 5773.07 18817.33 00:30:39.447 ======================================================== 00:30:39.447 Total : 103639.70 1214.53 7404.23 5575.93 26447.13 00:30:39.447 00:30:39.447 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:39.447 ================================================================================= 00:30:39.447 1.00000% : 5873.034us 00:30:39.447 10.00000% : 6099.889us 00:30:39.447 25.00000% : 6402.363us 00:30:39.447 50.00000% : 6856.074us 00:30:39.447 75.00000% : 8217.206us 00:30:39.447 90.00000% : 9074.215us 00:30:39.447 95.00000% : 9779.988us 00:30:39.447 98.00000% : 10737.822us 00:30:39.447 99.00000% : 11594.831us 00:30:39.447 99.50000% : 19459.151us 00:30:39.447 99.90000% : 26012.751us 00:30:39.447 99.99000% : 26416.049us 00:30:39.447 99.99900% : 26617.698us 00:30:39.447 99.99990% : 26617.698us 00:30:39.447 99.99999% : 26617.698us 00:30:39.447 00:30:39.447 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:39.447 ================================================================================= 00:30:39.447 1.00000% : 5923.446us 00:30:39.447 10.00000% : 6150.302us 00:30:39.447 25.00000% : 6402.363us 00:30:39.447 50.00000% : 6856.074us 00:30:39.447 75.00000% : 8217.206us 00:30:39.447 90.00000% : 9023.803us 00:30:39.447 95.00000% : 9729.575us 00:30:39.447 98.00000% : 10687.409us 00:30:39.447 99.00000% : 11947.717us 00:30:39.447 99.50000% : 18652.554us 00:30:39.447 99.90000% : 24500.382us 00:30:39.447 99.99000% : 25004.505us 00:30:39.447 99.99900% : 25004.505us 00:30:39.447 99.99990% : 25004.505us 00:30:39.447 99.99999% : 25004.505us 00:30:39.447 00:30:39.447 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:39.447 ================================================================================= 00:30:39.447 1.00000% : 5948.652us 00:30:39.447 10.00000% : 6150.302us 00:30:39.447 25.00000% : 6402.363us 00:30:39.447 50.00000% : 6805.662us 00:30:39.447 75.00000% : 8267.618us 00:30:39.447 90.00000% : 9023.803us 00:30:39.447 95.00000% : 9628.751us 00:30:39.447 98.00000% : 10737.822us 00:30:39.447 99.00000% : 12199.778us 00:30:39.447 99.50000% : 17140.185us 00:30:39.447 99.90000% : 23088.837us 00:30:39.447 99.99000% : 23592.960us 00:30:39.447 99.99900% : 23592.960us 00:30:39.447 99.99990% : 23592.960us 00:30:39.447 99.99999% : 23592.960us 00:30:39.447 00:30:39.448 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:39.448 ================================================================================= 00:30:39.448 1.00000% : 5923.446us 00:30:39.448 10.00000% : 6150.302us 00:30:39.448 25.00000% : 6402.363us 00:30:39.448 50.00000% : 6805.662us 00:30:39.448 75.00000% : 8267.618us 00:30:39.448 90.00000% : 9023.803us 00:30:39.448 95.00000% : 9628.751us 00:30:39.448 98.00000% : 10687.409us 00:30:39.448 99.00000% : 
12048.542us 00:30:39.448 99.50000% : 15627.815us 00:30:39.448 99.90000% : 21576.468us 00:30:39.448 99.99000% : 21979.766us 00:30:39.448 99.99900% : 21979.766us 00:30:39.448 99.99990% : 21979.766us 00:30:39.448 99.99999% : 21979.766us 00:30:39.448 00:30:39.448 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:39.448 ================================================================================= 00:30:39.448 1.00000% : 5923.446us 00:30:39.448 10.00000% : 6150.302us 00:30:39.448 25.00000% : 6402.363us 00:30:39.448 50.00000% : 6856.074us 00:30:39.448 75.00000% : 8267.618us 00:30:39.448 90.00000% : 9023.803us 00:30:39.448 95.00000% : 9628.751us 00:30:39.448 98.00000% : 10838.646us 00:30:39.448 99.00000% : 11897.305us 00:30:39.448 99.50000% : 14014.622us 00:30:39.448 99.90000% : 20064.098us 00:30:39.448 99.99000% : 20467.397us 00:30:39.448 99.99900% : 20467.397us 00:30:39.448 99.99990% : 20467.397us 00:30:39.448 99.99999% : 20467.397us 00:30:39.448 00:30:39.448 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:39.448 ================================================================================= 00:30:39.448 1.00000% : 5948.652us 00:30:39.448 10.00000% : 6150.302us 00:30:39.448 25.00000% : 6402.363us 00:30:39.448 50.00000% : 6856.074us 00:30:39.448 75.00000% : 8217.206us 00:30:39.448 90.00000% : 9023.803us 00:30:39.448 95.00000% : 9679.163us 00:30:39.448 98.00000% : 10939.471us 00:30:39.448 99.00000% : 11645.243us 00:30:39.448 99.50000% : 12603.077us 00:30:39.448 99.90000% : 18350.080us 00:30:39.448 99.99000% : 18854.203us 00:30:39.448 99.99900% : 18854.203us 00:30:39.448 99.99990% : 18854.203us 00:30:39.448 99.99999% : 18854.203us 00:30:39.448 00:30:39.448 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:39.448 ============================================================================== 00:30:39.448 Range in us Cumulative IO count 00:30:39.448 5696.591 - 5721.797: 0.0231% ( 4) 00:30:39.448 5721.797 - 5747.003: 0.0807% ( 10) 00:30:39.448 5747.003 - 5772.209: 0.2306% ( 26) 00:30:39.448 5772.209 - 5797.415: 0.4151% ( 32) 00:30:39.448 5797.415 - 5822.622: 0.6688% ( 44) 00:30:39.448 5822.622 - 5847.828: 0.9456% ( 48) 00:30:39.448 5847.828 - 5873.034: 1.4184% ( 82) 00:30:39.448 5873.034 - 5898.240: 1.9949% ( 100) 00:30:39.448 5898.240 - 5923.446: 2.7387% ( 129) 00:30:39.448 5923.446 - 5948.652: 3.6324% ( 155) 00:30:39.448 5948.652 - 5973.858: 4.4915% ( 149) 00:30:39.448 5973.858 - 5999.065: 5.5005% ( 175) 00:30:39.448 5999.065 - 6024.271: 6.6132% ( 193) 00:30:39.448 6024.271 - 6049.477: 7.8183% ( 209) 00:30:39.448 6049.477 - 6074.683: 9.0521% ( 214) 00:30:39.448 6074.683 - 6099.889: 10.2629% ( 210) 00:30:39.448 6099.889 - 6125.095: 11.5602% ( 225) 00:30:39.448 6125.095 - 6150.302: 12.7652% ( 209) 00:30:39.448 6150.302 - 6175.508: 14.1144% ( 234) 00:30:39.448 6175.508 - 6200.714: 15.3079% ( 207) 00:30:39.448 6200.714 - 6225.920: 16.6628% ( 235) 00:30:39.448 6225.920 - 6251.126: 18.0005% ( 232) 00:30:39.448 6251.126 - 6276.332: 19.3381% ( 232) 00:30:39.448 6276.332 - 6301.538: 20.6469% ( 227) 00:30:39.448 6301.538 - 6326.745: 21.9442% ( 225) 00:30:39.448 6326.745 - 6351.951: 23.4375% ( 259) 00:30:39.448 6351.951 - 6377.157: 24.6887% ( 217) 00:30:39.448 6377.157 - 6402.363: 26.0551% ( 237) 00:30:39.448 6402.363 - 6427.569: 27.4619% ( 244) 00:30:39.448 6427.569 - 6452.775: 28.9322% ( 255) 00:30:39.448 6452.775 - 6503.188: 31.7286% ( 485) 00:30:39.448 6503.188 - 6553.600: 34.5307% ( 486) 00:30:39.448 6553.600 - 6604.012: 37.3155% ( 483) 
00:30:39.448 6604.012 - 6654.425: 40.1061% ( 484) 00:30:39.448 6654.425 - 6704.837: 43.0581% ( 512) 00:30:39.448 6704.837 - 6755.249: 45.8141% ( 478) 00:30:39.448 6755.249 - 6805.662: 48.3683% ( 443) 00:30:39.448 6805.662 - 6856.074: 50.5996% ( 387) 00:30:39.448 6856.074 - 6906.486: 52.3351% ( 301) 00:30:39.448 6906.486 - 6956.898: 53.7131% ( 239) 00:30:39.448 6956.898 - 7007.311: 54.8720% ( 201) 00:30:39.448 7007.311 - 7057.723: 55.8233% ( 165) 00:30:39.448 7057.723 - 7108.135: 56.5959% ( 134) 00:30:39.448 7108.135 - 7158.548: 57.2878% ( 120) 00:30:39.448 7158.548 - 7208.960: 57.9048% ( 107) 00:30:39.448 7208.960 - 7259.372: 58.4525% ( 95) 00:30:39.448 7259.372 - 7309.785: 59.0175% ( 98) 00:30:39.448 7309.785 - 7360.197: 59.6345% ( 107) 00:30:39.448 7360.197 - 7410.609: 60.3379% ( 122) 00:30:39.448 7410.609 - 7461.022: 61.1739% ( 145) 00:30:39.448 7461.022 - 7511.434: 61.9984% ( 143) 00:30:39.448 7511.434 - 7561.846: 62.7422% ( 129) 00:30:39.448 7561.846 - 7612.258: 63.5609% ( 142) 00:30:39.448 7612.258 - 7662.671: 64.4661% ( 157) 00:30:39.448 7662.671 - 7713.083: 65.3425% ( 152) 00:30:39.448 7713.083 - 7763.495: 66.2938% ( 165) 00:30:39.448 7763.495 - 7813.908: 67.3143% ( 177) 00:30:39.448 7813.908 - 7864.320: 68.3291% ( 176) 00:30:39.448 7864.320 - 7914.732: 69.3669% ( 180) 00:30:39.448 7914.732 - 7965.145: 70.3413% ( 169) 00:30:39.448 7965.145 - 8015.557: 71.3792% ( 180) 00:30:39.448 8015.557 - 8065.969: 72.3997% ( 177) 00:30:39.448 8065.969 - 8116.382: 73.4029% ( 174) 00:30:39.448 8116.382 - 8166.794: 74.4696% ( 185) 00:30:39.448 8166.794 - 8217.206: 75.5016% ( 179) 00:30:39.448 8217.206 - 8267.618: 76.4587% ( 166) 00:30:39.448 8267.618 - 8318.031: 77.4965% ( 180) 00:30:39.448 8318.031 - 8368.443: 78.5113% ( 176) 00:30:39.448 8368.443 - 8418.855: 79.5203% ( 175) 00:30:39.448 8418.855 - 8469.268: 80.4774% ( 166) 00:30:39.448 8469.268 - 8519.680: 81.4749% ( 173) 00:30:39.448 8519.680 - 8570.092: 82.4839% ( 175) 00:30:39.448 8570.092 - 8620.505: 83.4525% ( 168) 00:30:39.448 8620.505 - 8670.917: 84.3981% ( 164) 00:30:39.448 8670.917 - 8721.329: 85.2514% ( 148) 00:30:39.448 8721.329 - 8771.742: 86.0182% ( 133) 00:30:39.448 8771.742 - 8822.154: 86.7908% ( 134) 00:30:39.448 8822.154 - 8872.566: 87.5750% ( 136) 00:30:39.448 8872.566 - 8922.978: 88.2899% ( 124) 00:30:39.448 8922.978 - 8973.391: 89.0048% ( 124) 00:30:39.448 8973.391 - 9023.803: 89.6737% ( 116) 00:30:39.448 9023.803 - 9074.215: 90.2618% ( 102) 00:30:39.448 9074.215 - 9124.628: 90.8556% ( 103) 00:30:39.448 9124.628 - 9175.040: 91.3284% ( 82) 00:30:39.448 9175.040 - 9225.452: 91.9107% ( 101) 00:30:39.448 9225.452 - 9275.865: 92.3086% ( 69) 00:30:39.448 9275.865 - 9326.277: 92.7583% ( 78) 00:30:39.448 9326.277 - 9376.689: 93.1158% ( 62) 00:30:39.448 9376.689 - 9427.102: 93.5309% ( 72) 00:30:39.448 9427.102 - 9477.514: 93.8423% ( 54) 00:30:39.448 9477.514 - 9527.926: 94.0671% ( 39) 00:30:39.448 9527.926 - 9578.338: 94.3323% ( 46) 00:30:39.448 9578.338 - 9628.751: 94.5457% ( 37) 00:30:39.448 9628.751 - 9679.163: 94.7532% ( 36) 00:30:39.448 9679.163 - 9729.575: 94.9666% ( 37) 00:30:39.448 9729.575 - 9779.988: 95.1914% ( 39) 00:30:39.448 9779.988 - 9830.400: 95.4163% ( 39) 00:30:39.448 9830.400 - 9880.812: 95.5835% ( 29) 00:30:39.448 9880.812 - 9931.225: 95.8141% ( 40) 00:30:39.448 9931.225 - 9981.637: 96.0447% ( 40) 00:30:39.448 9981.637 - 10032.049: 96.2581% ( 37) 00:30:39.448 10032.049 - 10082.462: 96.4310% ( 30) 00:30:39.448 10082.462 - 10132.874: 96.6271% ( 34) 00:30:39.448 10132.874 - 10183.286: 96.7943% ( 29) 00:30:39.448 
10183.286 - 10233.698: 96.9557% ( 28) 00:30:39.448 10233.698 - 10284.111: 97.1114% ( 27) 00:30:39.448 10284.111 - 10334.523: 97.2267% ( 20) 00:30:39.448 10334.523 - 10384.935: 97.3190% ( 16) 00:30:39.448 10384.935 - 10435.348: 97.4400% ( 21) 00:30:39.448 10435.348 - 10485.760: 97.5381% ( 17) 00:30:39.448 10485.760 - 10536.172: 97.6418% ( 18) 00:30:39.448 10536.172 - 10586.585: 97.7514% ( 19) 00:30:39.448 10586.585 - 10636.997: 97.8379% ( 15) 00:30:39.448 10636.997 - 10687.409: 97.9705% ( 23) 00:30:39.448 10687.409 - 10737.822: 98.0339% ( 11) 00:30:39.448 10737.822 - 10788.234: 98.1262% ( 16) 00:30:39.448 10788.234 - 10838.646: 98.1953% ( 12) 00:30:39.448 10838.646 - 10889.058: 98.2818% ( 15) 00:30:39.448 10889.058 - 10939.471: 98.3222% ( 7) 00:30:39.448 10939.471 - 10989.883: 98.3914% ( 12) 00:30:39.448 10989.883 - 11040.295: 98.4663% ( 13) 00:30:39.448 11040.295 - 11090.708: 98.5298% ( 11) 00:30:39.448 11090.708 - 11141.120: 98.5932% ( 11) 00:30:39.448 11141.120 - 11191.532: 98.6451% ( 9) 00:30:39.448 11191.532 - 11241.945: 98.7027% ( 10) 00:30:39.448 11241.945 - 11292.357: 98.7373% ( 6) 00:30:39.448 11292.357 - 11342.769: 98.7892% ( 9) 00:30:39.448 11342.769 - 11393.182: 98.8411% ( 9) 00:30:39.448 11393.182 - 11443.594: 98.8815% ( 7) 00:30:39.448 11443.594 - 11494.006: 98.9333% ( 9) 00:30:39.448 11494.006 - 11544.418: 98.9737% ( 7) 00:30:39.448 11544.418 - 11594.831: 99.0198% ( 8) 00:30:39.448 11594.831 - 11645.243: 99.0429% ( 4) 00:30:39.448 11645.243 - 11695.655: 99.0660% ( 4) 00:30:39.448 11695.655 - 11746.068: 99.0890% ( 4) 00:30:39.448 11746.068 - 11796.480: 99.1179% ( 5) 00:30:39.448 11796.480 - 11846.892: 99.1409% ( 4) 00:30:39.448 11846.892 - 11897.305: 99.1582% ( 3) 00:30:39.448 11897.305 - 11947.717: 99.1697% ( 2) 00:30:39.448 11947.717 - 11998.129: 99.1813% ( 2) 00:30:39.448 11998.129 - 12048.542: 99.1986% ( 3) 00:30:39.448 12048.542 - 12098.954: 99.2043% ( 1) 00:30:39.448 12098.954 - 12149.366: 99.2159% ( 2) 00:30:39.448 12149.366 - 12199.778: 99.2332% ( 3) 00:30:39.448 12199.778 - 12250.191: 99.2447% ( 2) 00:30:39.448 12250.191 - 12300.603: 99.2562% ( 2) 00:30:39.449 12300.603 - 12351.015: 99.2620% ( 1) 00:30:39.449 17644.308 - 17745.132: 99.2735% ( 2) 00:30:39.449 17745.132 - 17845.957: 99.2851% ( 2) 00:30:39.449 17845.957 - 17946.782: 99.3081% ( 4) 00:30:39.449 17946.782 - 18047.606: 99.3139% ( 1) 00:30:39.449 18047.606 - 18148.431: 99.3312% ( 3) 00:30:39.449 18148.431 - 18249.255: 99.3427% ( 2) 00:30:39.449 18249.255 - 18350.080: 99.3600% ( 3) 00:30:39.449 18350.080 - 18450.905: 99.3773% ( 3) 00:30:39.449 18450.905 - 18551.729: 99.3888% ( 2) 00:30:39.449 18551.729 - 18652.554: 99.4061% ( 3) 00:30:39.449 18652.554 - 18753.378: 99.4234% ( 3) 00:30:39.449 18753.378 - 18854.203: 99.4292% ( 1) 00:30:39.449 18854.203 - 18955.028: 99.4407% ( 2) 00:30:39.449 18955.028 - 19055.852: 99.4580% ( 3) 00:30:39.449 19055.852 - 19156.677: 99.4753% ( 3) 00:30:39.449 19156.677 - 19257.502: 99.4811% ( 1) 00:30:39.449 19257.502 - 19358.326: 99.4926% ( 2) 00:30:39.449 19358.326 - 19459.151: 99.5099% ( 3) 00:30:39.449 19459.151 - 19559.975: 99.5272% ( 3) 00:30:39.449 19559.975 - 19660.800: 99.5387% ( 2) 00:30:39.449 19660.800 - 19761.625: 99.5503% ( 2) 00:30:39.449 19761.625 - 19862.449: 99.5733% ( 4) 00:30:39.449 19862.449 - 19963.274: 99.5849% ( 2) 00:30:39.449 19963.274 - 20064.098: 99.5964% ( 2) 00:30:39.449 20064.098 - 20164.923: 99.6195% ( 4) 00:30:39.449 20164.923 - 20265.748: 99.6310% ( 2) 00:30:39.449 24399.557 - 24500.382: 99.6425% ( 2) 00:30:39.449 24500.382 - 24601.206: 99.6598% ( 
3) 00:30:39.449 24601.206 - 24702.031: 99.6714% ( 2) 00:30:39.449 24702.031 - 24802.855: 99.6944% ( 4) 00:30:39.449 24802.855 - 24903.680: 99.7117% ( 3) 00:30:39.449 24903.680 - 25004.505: 99.7348% ( 4) 00:30:39.449 25004.505 - 25105.329: 99.7463% ( 2) 00:30:39.449 25105.329 - 25206.154: 99.7636% ( 3) 00:30:39.449 25206.154 - 25306.978: 99.7809% ( 3) 00:30:39.449 25306.978 - 25407.803: 99.7982% ( 3) 00:30:39.449 25407.803 - 25508.628: 99.8155% ( 3) 00:30:39.449 25508.628 - 25609.452: 99.8386% ( 4) 00:30:39.449 25609.452 - 25710.277: 99.8616% ( 4) 00:30:39.449 25710.277 - 25811.102: 99.8789% ( 3) 00:30:39.449 25811.102 - 26012.751: 99.9193% ( 7) 00:30:39.449 26012.751 - 26214.400: 99.9596% ( 7) 00:30:39.449 26214.400 - 26416.049: 99.9942% ( 6) 00:30:39.449 26416.049 - 26617.698: 100.0000% ( 1) 00:30:39.449 00:30:39.449 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:39.449 ============================================================================== 00:30:39.449 Range in us Cumulative IO count 00:30:39.449 5570.560 - 5595.766: 0.0173% ( 3) 00:30:39.449 5595.766 - 5620.972: 0.0461% ( 5) 00:30:39.449 5620.972 - 5646.178: 0.0577% ( 2) 00:30:39.449 5646.178 - 5671.385: 0.0692% ( 2) 00:30:39.449 5671.385 - 5696.591: 0.0750% ( 1) 00:30:39.449 5721.797 - 5747.003: 0.0807% ( 1) 00:30:39.449 5747.003 - 5772.209: 0.1038% ( 4) 00:30:39.449 5772.209 - 5797.415: 0.1384% ( 6) 00:30:39.449 5797.415 - 5822.622: 0.2018% ( 11) 00:30:39.449 5822.622 - 5847.828: 0.3863% ( 32) 00:30:39.449 5847.828 - 5873.034: 0.6342% ( 43) 00:30:39.449 5873.034 - 5898.240: 0.9225% ( 50) 00:30:39.449 5898.240 - 5923.446: 1.3261% ( 70) 00:30:39.449 5923.446 - 5948.652: 1.8104% ( 84) 00:30:39.449 5948.652 - 5973.858: 2.4965% ( 119) 00:30:39.449 5973.858 - 5999.065: 3.3902% ( 155) 00:30:39.449 5999.065 - 6024.271: 4.2954% ( 157) 00:30:39.449 6024.271 - 6049.477: 5.3217% ( 178) 00:30:39.449 6049.477 - 6074.683: 6.5095% ( 206) 00:30:39.449 6074.683 - 6099.889: 7.7606% ( 217) 00:30:39.449 6099.889 - 6125.095: 9.1213% ( 236) 00:30:39.449 6125.095 - 6150.302: 10.5166% ( 242) 00:30:39.449 6150.302 - 6175.508: 11.8773% ( 236) 00:30:39.449 6175.508 - 6200.714: 13.3591% ( 257) 00:30:39.449 6200.714 - 6225.920: 14.9504% ( 276) 00:30:39.449 6225.920 - 6251.126: 16.4668% ( 263) 00:30:39.449 6251.126 - 6276.332: 17.9774% ( 262) 00:30:39.449 6276.332 - 6301.538: 19.6206% ( 285) 00:30:39.449 6301.538 - 6326.745: 21.2523% ( 283) 00:30:39.449 6326.745 - 6351.951: 22.8898% ( 284) 00:30:39.449 6351.951 - 6377.157: 24.4465% ( 270) 00:30:39.449 6377.157 - 6402.363: 26.0955% ( 286) 00:30:39.449 6402.363 - 6427.569: 27.7099% ( 280) 00:30:39.449 6427.569 - 6452.775: 29.3819% ( 290) 00:30:39.449 6452.775 - 6503.188: 32.7030% ( 576) 00:30:39.449 6503.188 - 6553.600: 36.0298% ( 577) 00:30:39.449 6553.600 - 6604.012: 39.2758% ( 563) 00:30:39.449 6604.012 - 6654.425: 42.4066% ( 543) 00:30:39.449 6654.425 - 6704.837: 45.2779% ( 498) 00:30:39.449 6704.837 - 6755.249: 47.8090% ( 439) 00:30:39.449 6755.249 - 6805.662: 49.9423% ( 370) 00:30:39.449 6805.662 - 6856.074: 51.7182% ( 308) 00:30:39.449 6856.074 - 6906.486: 53.1423% ( 247) 00:30:39.449 6906.486 - 6956.898: 54.3127% ( 203) 00:30:39.449 6956.898 - 7007.311: 55.2583% ( 164) 00:30:39.449 7007.311 - 7057.723: 56.0655% ( 140) 00:30:39.449 7057.723 - 7108.135: 56.7977% ( 127) 00:30:39.449 7108.135 - 7158.548: 57.4147% ( 107) 00:30:39.449 7158.548 - 7208.960: 57.9739% ( 97) 00:30:39.449 7208.960 - 7259.372: 58.5101% ( 93) 00:30:39.449 7259.372 - 7309.785: 59.0348% ( 91) 00:30:39.449 7309.785 
- 7360.197: 59.5710% ( 93) 00:30:39.449 7360.197 - 7410.609: 60.1764% ( 105) 00:30:39.449 7410.609 - 7461.022: 60.9663% ( 137) 00:30:39.449 7461.022 - 7511.434: 61.7043% ( 128) 00:30:39.449 7511.434 - 7561.846: 62.3328% ( 109) 00:30:39.449 7561.846 - 7612.258: 63.0304% ( 121) 00:30:39.449 7612.258 - 7662.671: 63.7569% ( 126) 00:30:39.449 7662.671 - 7713.083: 64.5295% ( 134) 00:30:39.449 7713.083 - 7763.495: 65.3252% ( 138) 00:30:39.449 7763.495 - 7813.908: 66.3169% ( 172) 00:30:39.449 7813.908 - 7864.320: 67.2798% ( 167) 00:30:39.449 7864.320 - 7914.732: 68.3060% ( 178) 00:30:39.449 7914.732 - 7965.145: 69.3900% ( 188) 00:30:39.449 7965.145 - 8015.557: 70.5143% ( 195) 00:30:39.449 8015.557 - 8065.969: 71.7770% ( 219) 00:30:39.449 8065.969 - 8116.382: 73.0224% ( 216) 00:30:39.449 8116.382 - 8166.794: 74.1467% ( 195) 00:30:39.449 8166.794 - 8217.206: 75.3171% ( 203) 00:30:39.449 8217.206 - 8267.618: 76.4933% ( 204) 00:30:39.449 8267.618 - 8318.031: 77.6119% ( 194) 00:30:39.449 8318.031 - 8368.443: 78.7016% ( 189) 00:30:39.449 8368.443 - 8418.855: 79.7567% ( 183) 00:30:39.449 8418.855 - 8469.268: 80.8637% ( 192) 00:30:39.449 8469.268 - 8519.680: 81.8612% ( 173) 00:30:39.449 8519.680 - 8570.092: 82.8702% ( 175) 00:30:39.449 8570.092 - 8620.505: 83.9656% ( 190) 00:30:39.449 8620.505 - 8670.917: 84.9343% ( 168) 00:30:39.449 8670.917 - 8721.329: 85.8741% ( 163) 00:30:39.449 8721.329 - 8771.742: 86.8254% ( 165) 00:30:39.449 8771.742 - 8822.154: 87.7537% ( 161) 00:30:39.449 8822.154 - 8872.566: 88.5782% ( 143) 00:30:39.449 8872.566 - 8922.978: 89.3104% ( 127) 00:30:39.449 8922.978 - 8973.391: 89.9908% ( 118) 00:30:39.449 8973.391 - 9023.803: 90.5789% ( 102) 00:30:39.449 9023.803 - 9074.215: 91.1093% ( 92) 00:30:39.449 9074.215 - 9124.628: 91.5706% ( 80) 00:30:39.449 9124.628 - 9175.040: 92.0203% ( 78) 00:30:39.449 9175.040 - 9225.452: 92.4815% ( 80) 00:30:39.450 9225.452 - 9275.865: 92.8909% ( 71) 00:30:39.450 9275.865 - 9326.277: 93.2369% ( 60) 00:30:39.450 9326.277 - 9376.689: 93.5367% ( 52) 00:30:39.450 9376.689 - 9427.102: 93.8192% ( 49) 00:30:39.450 9427.102 - 9477.514: 94.0786% ( 45) 00:30:39.450 9477.514 - 9527.926: 94.3381% ( 45) 00:30:39.450 9527.926 - 9578.338: 94.5630% ( 39) 00:30:39.450 9578.338 - 9628.751: 94.7705% ( 36) 00:30:39.450 9628.751 - 9679.163: 94.9666% ( 34) 00:30:39.450 9679.163 - 9729.575: 95.1568% ( 33) 00:30:39.450 9729.575 - 9779.988: 95.3471% ( 33) 00:30:39.450 9779.988 - 9830.400: 95.5258% ( 31) 00:30:39.450 9830.400 - 9880.812: 95.6815% ( 27) 00:30:39.450 9880.812 - 9931.225: 95.8429% ( 28) 00:30:39.450 9931.225 - 9981.637: 95.9986% ( 27) 00:30:39.450 9981.637 - 10032.049: 96.1370% ( 24) 00:30:39.450 10032.049 - 10082.462: 96.2927% ( 27) 00:30:39.450 10082.462 - 10132.874: 96.4483% ( 27) 00:30:39.450 10132.874 - 10183.286: 96.6098% ( 28) 00:30:39.450 10183.286 - 10233.698: 96.7827% ( 30) 00:30:39.450 10233.698 - 10284.111: 96.9500% ( 29) 00:30:39.450 10284.111 - 10334.523: 97.1287% ( 31) 00:30:39.450 10334.523 - 10384.935: 97.3017% ( 30) 00:30:39.450 10384.935 - 10435.348: 97.4689% ( 29) 00:30:39.450 10435.348 - 10485.760: 97.6188% ( 26) 00:30:39.450 10485.760 - 10536.172: 97.7399% ( 21) 00:30:39.450 10536.172 - 10586.585: 97.8494% ( 19) 00:30:39.450 10586.585 - 10636.997: 97.9474% ( 17) 00:30:39.450 10636.997 - 10687.409: 98.0685% ( 21) 00:30:39.450 10687.409 - 10737.822: 98.1607% ( 16) 00:30:39.450 10737.822 - 10788.234: 98.2645% ( 18) 00:30:39.450 10788.234 - 10838.646: 98.3683% ( 18) 00:30:39.450 10838.646 - 10889.058: 98.4548% ( 15) 00:30:39.450 10889.058 - 
10939.471: 98.4894% ( 6) 00:30:39.450 10939.471 - 10989.883: 98.5182% ( 5) 00:30:39.450 10989.883 - 11040.295: 98.5413% ( 4) 00:30:39.450 11040.295 - 11090.708: 98.5701% ( 5) 00:30:39.450 11090.708 - 11141.120: 98.5989% ( 5) 00:30:39.450 11141.120 - 11191.532: 98.6220% ( 4) 00:30:39.450 11191.532 - 11241.945: 98.6393% ( 3) 00:30:39.450 11241.945 - 11292.357: 98.6624% ( 4) 00:30:39.450 11292.357 - 11342.769: 98.6797% ( 3) 00:30:39.450 11342.769 - 11393.182: 98.6970% ( 3) 00:30:39.450 11393.182 - 11443.594: 98.7143% ( 3) 00:30:39.450 11443.594 - 11494.006: 98.7488% ( 6) 00:30:39.450 11494.006 - 11544.418: 98.7892% ( 7) 00:30:39.450 11544.418 - 11594.831: 98.8123% ( 4) 00:30:39.450 11594.831 - 11645.243: 98.8526% ( 7) 00:30:39.450 11645.243 - 11695.655: 98.8815% ( 5) 00:30:39.450 11695.655 - 11746.068: 98.9103% ( 5) 00:30:39.450 11746.068 - 11796.480: 98.9333% ( 4) 00:30:39.450 11796.480 - 11846.892: 98.9622% ( 5) 00:30:39.450 11846.892 - 11897.305: 98.9910% ( 5) 00:30:39.450 11897.305 - 11947.717: 99.0141% ( 4) 00:30:39.450 11947.717 - 11998.129: 99.0314% ( 3) 00:30:39.450 11998.129 - 12048.542: 99.0544% ( 4) 00:30:39.450 12048.542 - 12098.954: 99.0717% ( 3) 00:30:39.450 12098.954 - 12149.366: 99.0948% ( 4) 00:30:39.450 12149.366 - 12199.778: 99.1063% ( 2) 00:30:39.450 12199.778 - 12250.191: 99.1179% ( 2) 00:30:39.450 12250.191 - 12300.603: 99.1351% ( 3) 00:30:39.450 12300.603 - 12351.015: 99.1409% ( 1) 00:30:39.450 12351.015 - 12401.428: 99.1524% ( 2) 00:30:39.450 12401.428 - 12451.840: 99.1640% ( 2) 00:30:39.450 12451.840 - 12502.252: 99.1813% ( 3) 00:30:39.450 12502.252 - 12552.665: 99.1928% ( 2) 00:30:39.450 12552.665 - 12603.077: 99.2043% ( 2) 00:30:39.450 12603.077 - 12653.489: 99.2216% ( 3) 00:30:39.450 12653.489 - 12703.902: 99.2389% ( 3) 00:30:39.450 12703.902 - 12754.314: 99.2505% ( 2) 00:30:39.450 12754.314 - 12804.726: 99.2620% ( 2) 00:30:39.450 16938.535 - 17039.360: 99.2678% ( 1) 00:30:39.450 17039.360 - 17140.185: 99.2793% ( 2) 00:30:39.450 17140.185 - 17241.009: 99.2966% ( 3) 00:30:39.450 17241.009 - 17341.834: 99.3081% ( 2) 00:30:39.450 17341.834 - 17442.658: 99.3139% ( 1) 00:30:39.450 17442.658 - 17543.483: 99.3312% ( 3) 00:30:39.450 17543.483 - 17644.308: 99.3485% ( 3) 00:30:39.450 17644.308 - 17745.132: 99.3658% ( 3) 00:30:39.450 17745.132 - 17845.957: 99.3831% ( 3) 00:30:39.450 17845.957 - 17946.782: 99.4004% ( 3) 00:30:39.450 17946.782 - 18047.606: 99.4177% ( 3) 00:30:39.450 18047.606 - 18148.431: 99.4292% ( 2) 00:30:39.450 18148.431 - 18249.255: 99.4465% ( 3) 00:30:39.450 18249.255 - 18350.080: 99.4638% ( 3) 00:30:39.450 18350.080 - 18450.905: 99.4811% ( 3) 00:30:39.450 18450.905 - 18551.729: 99.4984% ( 3) 00:30:39.450 18551.729 - 18652.554: 99.5099% ( 2) 00:30:39.450 18652.554 - 18753.378: 99.5330% ( 4) 00:30:39.450 18753.378 - 18854.203: 99.5503% ( 3) 00:30:39.450 18854.203 - 18955.028: 99.5733% ( 4) 00:30:39.450 18955.028 - 19055.852: 99.5906% ( 3) 00:30:39.450 19055.852 - 19156.677: 99.6137% ( 4) 00:30:39.450 19156.677 - 19257.502: 99.6310% ( 3) 00:30:39.450 23088.837 - 23189.662: 99.6368% ( 1) 00:30:39.450 23189.662 - 23290.486: 99.6541% ( 3) 00:30:39.450 23290.486 - 23391.311: 99.6714% ( 3) 00:30:39.450 23391.311 - 23492.135: 99.6944% ( 4) 00:30:39.450 23492.135 - 23592.960: 99.7175% ( 4) 00:30:39.450 23592.960 - 23693.785: 99.7348% ( 3) 00:30:39.450 23693.785 - 23794.609: 99.7578% ( 4) 00:30:39.450 23794.609 - 23895.434: 99.7809% ( 4) 00:30:39.450 23895.434 - 23996.258: 99.7982% ( 3) 00:30:39.450 23996.258 - 24097.083: 99.8213% ( 4) 00:30:39.450 24097.083 - 
24197.908: 99.8443% ( 4) 00:30:39.450 24197.908 - 24298.732: 99.8616% ( 3) 00:30:39.450 24298.732 - 24399.557: 99.8847% ( 4) 00:30:39.450 24399.557 - 24500.382: 99.9020% ( 3) 00:30:39.450 24500.382 - 24601.206: 99.9250% ( 4) 00:30:39.450 24601.206 - 24702.031: 99.9481% ( 4) 00:30:39.450 24702.031 - 24802.855: 99.9654% ( 3) 00:30:39.450 24802.855 - 24903.680: 99.9885% ( 4) 00:30:39.450 24903.680 - 25004.505: 100.0000% ( 2) 00:30:39.450 00:30:39.450 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:39.450 ============================================================================== 00:30:39.450 Range in us Cumulative IO count 00:30:39.450 5747.003 - 5772.209: 0.0288% ( 5) 00:30:39.450 5772.209 - 5797.415: 0.0807% ( 9) 00:30:39.450 5797.415 - 5822.622: 0.1441% ( 11) 00:30:39.450 5822.622 - 5847.828: 0.2249% ( 14) 00:30:39.450 5847.828 - 5873.034: 0.3632% ( 24) 00:30:39.450 5873.034 - 5898.240: 0.5766% ( 37) 00:30:39.450 5898.240 - 5923.446: 0.9340% ( 62) 00:30:39.450 5923.446 - 5948.652: 1.4241% ( 85) 00:30:39.450 5948.652 - 5973.858: 2.1737% ( 130) 00:30:39.450 5973.858 - 5999.065: 3.2518% ( 187) 00:30:39.450 5999.065 - 6024.271: 4.4050% ( 200) 00:30:39.450 6024.271 - 6049.477: 5.4486% ( 181) 00:30:39.450 6049.477 - 6074.683: 6.5556% ( 192) 00:30:39.450 6074.683 - 6099.889: 7.7145% ( 201) 00:30:39.450 6099.889 - 6125.095: 8.9080% ( 207) 00:30:39.450 6125.095 - 6150.302: 10.2975% ( 241) 00:30:39.450 6150.302 - 6175.508: 11.7389% ( 250) 00:30:39.450 6175.508 - 6200.714: 13.2784% ( 267) 00:30:39.450 6200.714 - 6225.920: 14.8293% ( 269) 00:30:39.450 6225.920 - 6251.126: 16.4149% ( 275) 00:30:39.450 6251.126 - 6276.332: 17.9716% ( 270) 00:30:39.450 6276.332 - 6301.538: 19.6033% ( 283) 00:30:39.450 6301.538 - 6326.745: 21.2869% ( 292) 00:30:39.450 6326.745 - 6351.951: 22.9128% ( 282) 00:30:39.450 6351.951 - 6377.157: 24.5560% ( 285) 00:30:39.450 6377.157 - 6402.363: 26.2108% ( 287) 00:30:39.450 6402.363 - 6427.569: 27.8713% ( 288) 00:30:39.450 6427.569 - 6452.775: 29.6010% ( 300) 00:30:39.450 6452.775 - 6503.188: 32.9682% ( 584) 00:30:39.450 6503.188 - 6553.600: 36.4276% ( 600) 00:30:39.450 6553.600 - 6604.012: 39.8178% ( 588) 00:30:39.450 6604.012 - 6654.425: 42.9313% ( 540) 00:30:39.450 6654.425 - 6704.837: 45.8083% ( 499) 00:30:39.450 6704.837 - 6755.249: 48.3568% ( 442) 00:30:39.450 6755.249 - 6805.662: 50.4497% ( 363) 00:30:39.450 6805.662 - 6856.074: 52.1160% ( 289) 00:30:39.450 6856.074 - 6906.486: 53.6554% ( 267) 00:30:39.450 6906.486 - 6956.898: 54.8086% ( 200) 00:30:39.450 6956.898 - 7007.311: 55.7369% ( 161) 00:30:39.450 7007.311 - 7057.723: 56.5210% ( 136) 00:30:39.450 7057.723 - 7108.135: 57.2013% ( 118) 00:30:39.450 7108.135 - 7158.548: 57.8356% ( 110) 00:30:39.450 7158.548 - 7208.960: 58.4237% ( 102) 00:30:39.450 7208.960 - 7259.372: 58.9137% ( 85) 00:30:39.450 7259.372 - 7309.785: 59.3981% ( 84) 00:30:39.450 7309.785 - 7360.197: 59.9227% ( 91) 00:30:39.450 7360.197 - 7410.609: 60.4301% ( 88) 00:30:39.450 7410.609 - 7461.022: 61.0182% ( 102) 00:30:39.450 7461.022 - 7511.434: 61.5487% ( 92) 00:30:39.450 7511.434 - 7561.846: 62.2175% ( 116) 00:30:39.450 7561.846 - 7612.258: 63.0708% ( 148) 00:30:39.450 7612.258 - 7662.671: 63.7396% ( 116) 00:30:39.450 7662.671 - 7713.083: 64.6275% ( 154) 00:30:39.450 7713.083 - 7763.495: 65.4117% ( 136) 00:30:39.450 7763.495 - 7813.908: 66.2246% ( 141) 00:30:39.450 7813.908 - 7864.320: 67.0145% ( 137) 00:30:39.450 7864.320 - 7914.732: 67.9543% ( 163) 00:30:39.450 7914.732 - 7965.145: 68.9460% ( 172) 00:30:39.450 7965.145 - 8015.557: 
69.9435% ( 173) 00:30:39.450 8015.557 - 8065.969: 70.9352% ( 172) 00:30:39.450 8065.969 - 8116.382: 72.0999% ( 202) 00:30:39.450 8116.382 - 8166.794: 73.3222% ( 212) 00:30:39.450 8166.794 - 8217.206: 74.5676% ( 216) 00:30:39.450 8217.206 - 8267.618: 75.7553% ( 206) 00:30:39.450 8267.618 - 8318.031: 76.8681% ( 193) 00:30:39.450 8318.031 - 8368.443: 77.9520% ( 188) 00:30:39.450 8368.443 - 8418.855: 79.0821% ( 196) 00:30:39.450 8418.855 - 8469.268: 80.1949% ( 193) 00:30:39.450 8469.268 - 8519.680: 81.2788% ( 188) 00:30:39.450 8519.680 - 8570.092: 82.3916% ( 193) 00:30:39.450 8570.092 - 8620.505: 83.5966% ( 209) 00:30:39.450 8620.505 - 8670.917: 84.7498% ( 200) 00:30:39.450 8670.917 - 8721.329: 85.8164% ( 185) 00:30:39.451 8721.329 - 8771.742: 86.7678% ( 165) 00:30:39.451 8771.742 - 8822.154: 87.5807% ( 141) 00:30:39.451 8822.154 - 8872.566: 88.3476% ( 133) 00:30:39.451 8872.566 - 8922.978: 89.1086% ( 132) 00:30:39.451 8922.978 - 8973.391: 89.8293% ( 125) 00:30:39.451 8973.391 - 9023.803: 90.5097% ( 118) 00:30:39.451 9023.803 - 9074.215: 91.0517% ( 94) 00:30:39.451 9074.215 - 9124.628: 91.5475% ( 86) 00:30:39.451 9124.628 - 9175.040: 91.9857% ( 76) 00:30:39.451 9175.040 - 9225.452: 92.3893% ( 70) 00:30:39.451 9225.452 - 9275.865: 92.7756% ( 67) 00:30:39.451 9275.865 - 9326.277: 93.1619% ( 67) 00:30:39.451 9326.277 - 9376.689: 93.5828% ( 73) 00:30:39.451 9376.689 - 9427.102: 93.9576% ( 65) 00:30:39.451 9427.102 - 9477.514: 94.2862% ( 57) 00:30:39.451 9477.514 - 9527.926: 94.5745% ( 50) 00:30:39.451 9527.926 - 9578.338: 94.8685% ( 51) 00:30:39.451 9578.338 - 9628.751: 95.1626% ( 51) 00:30:39.451 9628.751 - 9679.163: 95.4393% ( 48) 00:30:39.451 9679.163 - 9729.575: 95.6757% ( 41) 00:30:39.451 9729.575 - 9779.988: 95.9064% ( 40) 00:30:39.451 9779.988 - 9830.400: 96.0678% ( 28) 00:30:39.451 9830.400 - 9880.812: 96.2350% ( 29) 00:30:39.451 9880.812 - 9931.225: 96.3792% ( 25) 00:30:39.451 9931.225 - 9981.637: 96.5118% ( 23) 00:30:39.451 9981.637 - 10032.049: 96.6617% ( 26) 00:30:39.451 10032.049 - 10082.462: 96.7770% ( 20) 00:30:39.451 10082.462 - 10132.874: 96.9038% ( 22) 00:30:39.451 10132.874 - 10183.286: 97.0364% ( 23) 00:30:39.451 10183.286 - 10233.698: 97.1518% ( 20) 00:30:39.451 10233.698 - 10284.111: 97.2555% ( 18) 00:30:39.451 10284.111 - 10334.523: 97.3536% ( 17) 00:30:39.451 10334.523 - 10384.935: 97.4631% ( 19) 00:30:39.451 10384.935 - 10435.348: 97.5726% ( 19) 00:30:39.451 10435.348 - 10485.760: 97.6591% ( 15) 00:30:39.451 10485.760 - 10536.172: 97.7399% ( 14) 00:30:39.451 10536.172 - 10586.585: 97.8206% ( 14) 00:30:39.451 10586.585 - 10636.997: 97.8840% ( 11) 00:30:39.451 10636.997 - 10687.409: 97.9647% ( 14) 00:30:39.451 10687.409 - 10737.822: 98.0224% ( 10) 00:30:39.451 10737.822 - 10788.234: 98.0858% ( 11) 00:30:39.451 10788.234 - 10838.646: 98.1319% ( 8) 00:30:39.451 10838.646 - 10889.058: 98.1723% ( 7) 00:30:39.451 10889.058 - 10939.471: 98.2069% ( 6) 00:30:39.451 10939.471 - 10989.883: 98.2472% ( 7) 00:30:39.451 10989.883 - 11040.295: 98.2934% ( 8) 00:30:39.451 11040.295 - 11090.708: 98.3222% ( 5) 00:30:39.451 11090.708 - 11141.120: 98.3741% ( 9) 00:30:39.451 11141.120 - 11191.532: 98.4260% ( 9) 00:30:39.451 11191.532 - 11241.945: 98.4721% ( 8) 00:30:39.451 11241.945 - 11292.357: 98.5182% ( 8) 00:30:39.451 11292.357 - 11342.769: 98.5701% ( 9) 00:30:39.451 11342.769 - 11393.182: 98.6162% ( 8) 00:30:39.451 11393.182 - 11443.594: 98.6624% ( 8) 00:30:39.451 11443.594 - 11494.006: 98.7143% ( 9) 00:30:39.451 11494.006 - 11544.418: 98.7661% ( 9) 00:30:39.451 11544.418 - 11594.831: 
98.7892% ( 4) 00:30:39.451 11594.831 - 11645.243: 98.8123% ( 4) 00:30:39.451 11645.243 - 11695.655: 98.8411% ( 5) 00:30:39.451 11695.655 - 11746.068: 98.8642% ( 4) 00:30:39.451 11746.068 - 11796.480: 98.8930% ( 5) 00:30:39.451 11947.717 - 11998.129: 98.9045% ( 2) 00:30:39.451 11998.129 - 12048.542: 98.9276% ( 4) 00:30:39.451 12048.542 - 12098.954: 98.9506% ( 4) 00:30:39.451 12098.954 - 12149.366: 98.9795% ( 5) 00:30:39.451 12149.366 - 12199.778: 99.0025% ( 4) 00:30:39.451 12199.778 - 12250.191: 99.0256% ( 4) 00:30:39.451 12250.191 - 12300.603: 99.0429% ( 3) 00:30:39.451 12300.603 - 12351.015: 99.0602% ( 3) 00:30:39.451 12351.015 - 12401.428: 99.0833% ( 4) 00:30:39.451 12401.428 - 12451.840: 99.1063% ( 4) 00:30:39.451 12451.840 - 12502.252: 99.1294% ( 4) 00:30:39.451 12502.252 - 12552.665: 99.1582% ( 5) 00:30:39.451 12552.665 - 12603.077: 99.1813% ( 4) 00:30:39.451 12603.077 - 12653.489: 99.2043% ( 4) 00:30:39.451 12653.489 - 12703.902: 99.2274% ( 4) 00:30:39.451 12703.902 - 12754.314: 99.2505% ( 4) 00:30:39.451 12754.314 - 12804.726: 99.2620% ( 2) 00:30:39.451 15829.465 - 15930.289: 99.2735% ( 2) 00:30:39.451 15930.289 - 16031.114: 99.2908% ( 3) 00:30:39.451 16031.114 - 16131.938: 99.3081% ( 3) 00:30:39.451 16131.938 - 16232.763: 99.3254% ( 3) 00:30:39.451 16232.763 - 16333.588: 99.3485% ( 4) 00:30:39.451 16333.588 - 16434.412: 99.3658% ( 3) 00:30:39.451 16434.412 - 16535.237: 99.3888% ( 4) 00:30:39.451 16535.237 - 16636.062: 99.4061% ( 3) 00:30:39.451 16636.062 - 16736.886: 99.4292% ( 4) 00:30:39.451 16736.886 - 16837.711: 99.4465% ( 3) 00:30:39.451 16837.711 - 16938.535: 99.4638% ( 3) 00:30:39.451 16938.535 - 17039.360: 99.4869% ( 4) 00:30:39.451 17039.360 - 17140.185: 99.5042% ( 3) 00:30:39.451 17140.185 - 17241.009: 99.5272% ( 4) 00:30:39.451 17241.009 - 17341.834: 99.5445% ( 3) 00:30:39.451 17341.834 - 17442.658: 99.5676% ( 4) 00:30:39.451 17442.658 - 17543.483: 99.5906% ( 4) 00:30:39.451 17543.483 - 17644.308: 99.6022% ( 2) 00:30:39.451 17644.308 - 17745.132: 99.6252% ( 4) 00:30:39.451 17745.132 - 17845.957: 99.6310% ( 1) 00:30:39.451 21677.292 - 21778.117: 99.6368% ( 1) 00:30:39.451 21778.117 - 21878.942: 99.6541% ( 3) 00:30:39.451 21878.942 - 21979.766: 99.6771% ( 4) 00:30:39.451 21979.766 - 22080.591: 99.6944% ( 3) 00:30:39.451 22080.591 - 22181.415: 99.7175% ( 4) 00:30:39.451 22181.415 - 22282.240: 99.7348% ( 3) 00:30:39.451 22282.240 - 22383.065: 99.7578% ( 4) 00:30:39.451 22383.065 - 22483.889: 99.7751% ( 3) 00:30:39.451 22483.889 - 22584.714: 99.7924% ( 3) 00:30:39.451 22584.714 - 22685.538: 99.8155% ( 4) 00:30:39.451 22685.538 - 22786.363: 99.8386% ( 4) 00:30:39.451 22786.363 - 22887.188: 99.8559% ( 3) 00:30:39.451 22887.188 - 22988.012: 99.8789% ( 4) 00:30:39.451 22988.012 - 23088.837: 99.9020% ( 4) 00:30:39.451 23088.837 - 23189.662: 99.9193% ( 3) 00:30:39.451 23189.662 - 23290.486: 99.9423% ( 4) 00:30:39.451 23290.486 - 23391.311: 99.9596% ( 3) 00:30:39.451 23391.311 - 23492.135: 99.9827% ( 4) 00:30:39.451 23492.135 - 23592.960: 100.0000% ( 3) 00:30:39.451 00:30:39.451 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:39.451 ============================================================================== 00:30:39.451 Range in us Cumulative IO count 00:30:39.451 5696.591 - 5721.797: 0.0058% ( 1) 00:30:39.451 5721.797 - 5747.003: 0.0173% ( 2) 00:30:39.451 5747.003 - 5772.209: 0.0404% ( 4) 00:30:39.451 5772.209 - 5797.415: 0.0807% ( 7) 00:30:39.451 5797.415 - 5822.622: 0.1499% ( 12) 00:30:39.451 5822.622 - 5847.828: 0.2652% ( 20) 00:30:39.451 5847.828 - 
5873.034: 0.4267% ( 28) 00:30:39.451 5873.034 - 5898.240: 0.6342% ( 36) 00:30:39.451 5898.240 - 5923.446: 1.0205% ( 67) 00:30:39.451 5923.446 - 5948.652: 1.5740% ( 96) 00:30:39.451 5948.652 - 5973.858: 2.3351% ( 132) 00:30:39.451 5973.858 - 5999.065: 3.2057% ( 151) 00:30:39.451 5999.065 - 6024.271: 4.1859% ( 170) 00:30:39.451 6024.271 - 6049.477: 5.2987% ( 193) 00:30:39.451 6049.477 - 6074.683: 6.5959% ( 225) 00:30:39.451 6074.683 - 6099.889: 7.8298% ( 214) 00:30:39.451 6099.889 - 6125.095: 9.1040% ( 221) 00:30:39.451 6125.095 - 6150.302: 10.4128% ( 227) 00:30:39.451 6150.302 - 6175.508: 11.9350% ( 264) 00:30:39.451 6175.508 - 6200.714: 13.4225% ( 258) 00:30:39.451 6200.714 - 6225.920: 14.9562% ( 266) 00:30:39.451 6225.920 - 6251.126: 16.4783% ( 264) 00:30:39.451 6251.126 - 6276.332: 18.0120% ( 266) 00:30:39.451 6276.332 - 6301.538: 19.5918% ( 274) 00:30:39.451 6301.538 - 6326.745: 21.1831% ( 276) 00:30:39.451 6326.745 - 6351.951: 22.8667% ( 292) 00:30:39.451 6351.951 - 6377.157: 24.5387% ( 290) 00:30:39.451 6377.157 - 6402.363: 26.1416% ( 278) 00:30:39.451 6402.363 - 6427.569: 27.7791% ( 284) 00:30:39.451 6427.569 - 6452.775: 29.4799% ( 295) 00:30:39.451 6452.775 - 6503.188: 32.7952% ( 575) 00:30:39.451 6503.188 - 6553.600: 36.0586% ( 566) 00:30:39.451 6553.600 - 6604.012: 39.4084% ( 581) 00:30:39.451 6604.012 - 6654.425: 42.5392% ( 543) 00:30:39.451 6654.425 - 6704.837: 45.5835% ( 528) 00:30:39.451 6704.837 - 6755.249: 48.1665% ( 448) 00:30:39.451 6755.249 - 6805.662: 50.3690% ( 382) 00:30:39.451 6805.662 - 6856.074: 52.1218% ( 304) 00:30:39.452 6856.074 - 6906.486: 53.5632% ( 250) 00:30:39.452 6906.486 - 6956.898: 54.6875% ( 195) 00:30:39.452 6956.898 - 7007.311: 55.6100% ( 160) 00:30:39.452 7007.311 - 7057.723: 56.3999% ( 137) 00:30:39.452 7057.723 - 7108.135: 57.0399% ( 111) 00:30:39.452 7108.135 - 7158.548: 57.5646% ( 91) 00:30:39.452 7158.548 - 7208.960: 58.1181% ( 96) 00:30:39.452 7208.960 - 7259.372: 58.6139% ( 86) 00:30:39.452 7259.372 - 7309.785: 59.1617% ( 95) 00:30:39.452 7309.785 - 7360.197: 59.7325% ( 99) 00:30:39.452 7360.197 - 7410.609: 60.3609% ( 109) 00:30:39.452 7410.609 - 7461.022: 60.9548% ( 103) 00:30:39.452 7461.022 - 7511.434: 61.5314% ( 100) 00:30:39.452 7511.434 - 7561.846: 62.1541% ( 108) 00:30:39.452 7561.846 - 7612.258: 62.8459% ( 120) 00:30:39.452 7612.258 - 7662.671: 63.6128% ( 133) 00:30:39.452 7662.671 - 7713.083: 64.3335% ( 125) 00:30:39.452 7713.083 - 7763.495: 65.2041% ( 151) 00:30:39.452 7763.495 - 7813.908: 65.9940% ( 137) 00:30:39.452 7813.908 - 7864.320: 66.9453% ( 165) 00:30:39.452 7864.320 - 7914.732: 67.9601% ( 176) 00:30:39.452 7914.732 - 7965.145: 68.8999% ( 163) 00:30:39.452 7965.145 - 8015.557: 69.8974% ( 173) 00:30:39.452 8015.557 - 8065.969: 70.9064% ( 175) 00:30:39.452 8065.969 - 8116.382: 71.9557% ( 182) 00:30:39.452 8116.382 - 8166.794: 73.0512% ( 190) 00:30:39.452 8166.794 - 8217.206: 74.2851% ( 214) 00:30:39.452 8217.206 - 8267.618: 75.5708% ( 223) 00:30:39.452 8267.618 - 8318.031: 76.7412% ( 203) 00:30:39.452 8318.031 - 8368.443: 77.9463% ( 209) 00:30:39.452 8368.443 - 8418.855: 79.0879% ( 198) 00:30:39.452 8418.855 - 8469.268: 80.3217% ( 214) 00:30:39.452 8469.268 - 8519.680: 81.5325% ( 210) 00:30:39.452 8519.680 - 8570.092: 82.6453% ( 193) 00:30:39.452 8570.092 - 8620.505: 83.6831% ( 180) 00:30:39.452 8620.505 - 8670.917: 84.6979% ( 176) 00:30:39.452 8670.917 - 8721.329: 85.6896% ( 172) 00:30:39.452 8721.329 - 8771.742: 86.6755% ( 171) 00:30:39.452 8771.742 - 8822.154: 87.5461% ( 151) 00:30:39.452 8822.154 - 8872.566: 88.3937% ( 
147) 00:30:39.452 8872.566 - 8922.978: 89.1893% ( 138) 00:30:39.452 8922.978 - 8973.391: 89.8870% ( 121) 00:30:39.452 8973.391 - 9023.803: 90.5212% ( 110) 00:30:39.452 9023.803 - 9074.215: 91.0747% ( 96) 00:30:39.452 9074.215 - 9124.628: 91.6398% ( 98) 00:30:39.452 9124.628 - 9175.040: 92.1471% ( 88) 00:30:39.452 9175.040 - 9225.452: 92.5796% ( 75) 00:30:39.452 9225.452 - 9275.865: 92.9659% ( 67) 00:30:39.452 9275.865 - 9326.277: 93.2945% ( 57) 00:30:39.452 9326.277 - 9376.689: 93.6232% ( 57) 00:30:39.452 9376.689 - 9427.102: 93.9403% ( 55) 00:30:39.452 9427.102 - 9477.514: 94.2747% ( 58) 00:30:39.452 9477.514 - 9527.926: 94.5457% ( 47) 00:30:39.452 9527.926 - 9578.338: 94.8512% ( 53) 00:30:39.452 9578.338 - 9628.751: 95.1280% ( 48) 00:30:39.452 9628.751 - 9679.163: 95.3413% ( 37) 00:30:39.452 9679.163 - 9729.575: 95.5662% ( 39) 00:30:39.452 9729.575 - 9779.988: 95.7738% ( 36) 00:30:39.452 9779.988 - 9830.400: 95.9640% ( 33) 00:30:39.452 9830.400 - 9880.812: 96.1082% ( 25) 00:30:39.452 9880.812 - 9931.225: 96.2235% ( 20) 00:30:39.452 9931.225 - 9981.637: 96.3330% ( 19) 00:30:39.452 9981.637 - 10032.049: 96.4714% ( 24) 00:30:39.452 10032.049 - 10082.462: 96.5982% ( 22) 00:30:39.452 10082.462 - 10132.874: 96.7136% ( 20) 00:30:39.452 10132.874 - 10183.286: 96.8289% ( 20) 00:30:39.452 10183.286 - 10233.698: 96.9500% ( 21) 00:30:39.452 10233.698 - 10284.111: 97.0826% ( 23) 00:30:39.452 10284.111 - 10334.523: 97.1690% ( 15) 00:30:39.452 10334.523 - 10384.935: 97.3074% ( 24) 00:30:39.452 10384.935 - 10435.348: 97.4458% ( 24) 00:30:39.452 10435.348 - 10485.760: 97.5842% ( 24) 00:30:39.452 10485.760 - 10536.172: 97.7053% ( 21) 00:30:39.452 10536.172 - 10586.585: 97.8263% ( 21) 00:30:39.452 10586.585 - 10636.997: 97.9417% ( 20) 00:30:39.452 10636.997 - 10687.409: 98.0454% ( 18) 00:30:39.452 10687.409 - 10737.822: 98.1377% ( 16) 00:30:39.452 10737.822 - 10788.234: 98.2299% ( 16) 00:30:39.452 10788.234 - 10838.646: 98.2991% ( 12) 00:30:39.452 10838.646 - 10889.058: 98.3395% ( 7) 00:30:39.452 10889.058 - 10939.471: 98.3798% ( 7) 00:30:39.452 10939.471 - 10989.883: 98.4317% ( 9) 00:30:39.452 10989.883 - 11040.295: 98.4779% ( 8) 00:30:39.452 11040.295 - 11090.708: 98.5298% ( 9) 00:30:39.452 11090.708 - 11141.120: 98.5816% ( 9) 00:30:39.452 11141.120 - 11191.532: 98.6278% ( 8) 00:30:39.452 11191.532 - 11241.945: 98.6739% ( 8) 00:30:39.452 11241.945 - 11292.357: 98.7027% ( 5) 00:30:39.452 11292.357 - 11342.769: 98.7315% ( 5) 00:30:39.452 11342.769 - 11393.182: 98.7431% ( 2) 00:30:39.452 11393.182 - 11443.594: 98.7546% ( 2) 00:30:39.452 11443.594 - 11494.006: 98.7661% ( 2) 00:30:39.452 11494.006 - 11544.418: 98.7777% ( 2) 00:30:39.452 11544.418 - 11594.831: 98.7892% ( 2) 00:30:39.452 11594.831 - 11645.243: 98.8065% ( 3) 00:30:39.452 11645.243 - 11695.655: 98.8353% ( 5) 00:30:39.452 11695.655 - 11746.068: 98.8584% ( 4) 00:30:39.452 11746.068 - 11796.480: 98.8930% ( 6) 00:30:39.452 11796.480 - 11846.892: 98.9218% ( 5) 00:30:39.452 11846.892 - 11897.305: 98.9449% ( 4) 00:30:39.452 11897.305 - 11947.717: 98.9737% ( 5) 00:30:39.452 11947.717 - 11998.129: 98.9910% ( 3) 00:30:39.452 11998.129 - 12048.542: 99.0141% ( 4) 00:30:39.452 12048.542 - 12098.954: 99.0314% ( 3) 00:30:39.452 12098.954 - 12149.366: 99.0487% ( 3) 00:30:39.452 12149.366 - 12199.778: 99.0602% ( 2) 00:30:39.452 12199.778 - 12250.191: 99.0775% ( 3) 00:30:39.452 12250.191 - 12300.603: 99.0890% ( 2) 00:30:39.452 12300.603 - 12351.015: 99.1063% ( 3) 00:30:39.452 12351.015 - 12401.428: 99.1236% ( 3) 00:30:39.452 12401.428 - 12451.840: 99.1351% ( 2) 
00:30:39.452 12451.840 - 12502.252: 99.1524% ( 3) 00:30:39.452 12502.252 - 12552.665: 99.1697% ( 3) 00:30:39.452 12552.665 - 12603.077: 99.1813% ( 2) 00:30:39.452 12603.077 - 12653.489: 99.1986% ( 3) 00:30:39.452 12653.489 - 12703.902: 99.2101% ( 2) 00:30:39.452 12703.902 - 12754.314: 99.2216% ( 2) 00:30:39.452 12754.314 - 12804.726: 99.2389% ( 3) 00:30:39.452 12804.726 - 12855.138: 99.2505% ( 2) 00:30:39.452 12855.138 - 12905.551: 99.2562% ( 1) 00:30:39.452 12905.551 - 13006.375: 99.2620% ( 1) 00:30:39.452 14317.095 - 14417.920: 99.2678% ( 1) 00:30:39.452 14417.920 - 14518.745: 99.2851% ( 3) 00:30:39.452 14518.745 - 14619.569: 99.3024% ( 3) 00:30:39.452 14619.569 - 14720.394: 99.3254% ( 4) 00:30:39.452 14720.394 - 14821.218: 99.3485% ( 4) 00:30:39.452 14821.218 - 14922.043: 99.3658% ( 3) 00:30:39.452 14922.043 - 15022.868: 99.3946% ( 5) 00:30:39.452 15022.868 - 15123.692: 99.4119% ( 3) 00:30:39.452 15123.692 - 15224.517: 99.4292% ( 3) 00:30:39.452 15224.517 - 15325.342: 99.4523% ( 4) 00:30:39.452 15325.342 - 15426.166: 99.4696% ( 3) 00:30:39.452 15426.166 - 15526.991: 99.4926% ( 4) 00:30:39.452 15526.991 - 15627.815: 99.5099% ( 3) 00:30:39.452 15627.815 - 15728.640: 99.5330% ( 4) 00:30:39.452 15728.640 - 15829.465: 99.5560% ( 4) 00:30:39.452 15829.465 - 15930.289: 99.5733% ( 3) 00:30:39.452 15930.289 - 16031.114: 99.5906% ( 3) 00:30:39.452 16031.114 - 16131.938: 99.6137% ( 4) 00:30:39.452 16131.938 - 16232.763: 99.6310% ( 3) 00:30:39.452 20164.923 - 20265.748: 99.6483% ( 3) 00:30:39.452 20265.748 - 20366.572: 99.6714% ( 4) 00:30:39.452 20366.572 - 20467.397: 99.6944% ( 4) 00:30:39.452 20467.397 - 20568.222: 99.7117% ( 3) 00:30:39.452 20568.222 - 20669.046: 99.7290% ( 3) 00:30:39.452 20669.046 - 20769.871: 99.7521% ( 4) 00:30:39.452 20769.871 - 20870.695: 99.7751% ( 4) 00:30:39.452 20870.695 - 20971.520: 99.7924% ( 3) 00:30:39.452 20971.520 - 21072.345: 99.8155% ( 4) 00:30:39.452 21072.345 - 21173.169: 99.8328% ( 3) 00:30:39.453 21173.169 - 21273.994: 99.8559% ( 4) 00:30:39.453 21273.994 - 21374.818: 99.8732% ( 3) 00:30:39.453 21374.818 - 21475.643: 99.8962% ( 4) 00:30:39.453 21475.643 - 21576.468: 99.9135% ( 3) 00:30:39.453 21576.468 - 21677.292: 99.9366% ( 4) 00:30:39.453 21677.292 - 21778.117: 99.9539% ( 3) 00:30:39.453 21778.117 - 21878.942: 99.9769% ( 4) 00:30:39.453 21878.942 - 21979.766: 100.0000% ( 4) 00:30:39.453 00:30:39.453 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:39.453 ============================================================================== 00:30:39.453 Range in us Cumulative IO count 00:30:39.453 5696.591 - 5721.797: 0.0115% ( 2) 00:30:39.453 5721.797 - 5747.003: 0.0404% ( 5) 00:30:39.453 5747.003 - 5772.209: 0.0461% ( 1) 00:30:39.453 5772.209 - 5797.415: 0.0865% ( 7) 00:30:39.453 5797.415 - 5822.622: 0.1499% ( 11) 00:30:39.453 5822.622 - 5847.828: 0.2710% ( 21) 00:30:39.453 5847.828 - 5873.034: 0.5074% ( 41) 00:30:39.453 5873.034 - 5898.240: 0.7784% ( 47) 00:30:39.453 5898.240 - 5923.446: 1.0321% ( 44) 00:30:39.453 5923.446 - 5948.652: 1.4587% ( 74) 00:30:39.453 5948.652 - 5973.858: 2.0756% ( 107) 00:30:39.453 5973.858 - 5999.065: 2.9001% ( 143) 00:30:39.453 5999.065 - 6024.271: 4.0590% ( 201) 00:30:39.453 6024.271 - 6049.477: 5.2237% ( 202) 00:30:39.453 6049.477 - 6074.683: 6.5556% ( 231) 00:30:39.453 6074.683 - 6099.889: 7.6165% ( 184) 00:30:39.453 6099.889 - 6125.095: 8.9368% ( 229) 00:30:39.453 6125.095 - 6150.302: 10.4820% ( 268) 00:30:39.453 6150.302 - 6175.508: 11.8658% ( 240) 00:30:39.453 6175.508 - 6200.714: 13.4629% ( 277) 
00:30:39.453 6200.714 - 6225.920: 14.8812% ( 246) 00:30:39.453 6225.920 - 6251.126: 16.3226% ( 250) 00:30:39.453 6251.126 - 6276.332: 17.8333% ( 262) 00:30:39.453 6276.332 - 6301.538: 19.3727% ( 267) 00:30:39.453 6301.538 - 6326.745: 21.0044% ( 283) 00:30:39.453 6326.745 - 6351.951: 22.6534% ( 286) 00:30:39.453 6351.951 - 6377.157: 24.2908% ( 284) 00:30:39.453 6377.157 - 6402.363: 25.8879% ( 277) 00:30:39.453 6402.363 - 6427.569: 27.4965% ( 279) 00:30:39.453 6427.569 - 6452.775: 29.0648% ( 272) 00:30:39.453 6452.775 - 6503.188: 32.2475% ( 552) 00:30:39.453 6503.188 - 6553.600: 35.5166% ( 567) 00:30:39.453 6553.600 - 6604.012: 38.7569% ( 562) 00:30:39.453 6604.012 - 6654.425: 41.9107% ( 547) 00:30:39.453 6654.425 - 6704.837: 44.8801% ( 515) 00:30:39.453 6704.837 - 6755.249: 47.4862% ( 452) 00:30:39.453 6755.249 - 6805.662: 49.7348% ( 390) 00:30:39.453 6805.662 - 6856.074: 51.5279% ( 311) 00:30:39.453 6856.074 - 6906.486: 53.0327% ( 261) 00:30:39.453 6906.486 - 6956.898: 54.2724% ( 215) 00:30:39.453 6956.898 - 7007.311: 55.2468% ( 169) 00:30:39.453 7007.311 - 7057.723: 56.0424% ( 138) 00:30:39.453 7057.723 - 7108.135: 56.8150% ( 134) 00:30:39.453 7108.135 - 7158.548: 57.3974% ( 101) 00:30:39.453 7158.548 - 7208.960: 57.9682% ( 99) 00:30:39.453 7208.960 - 7259.372: 58.5563% ( 102) 00:30:39.453 7259.372 - 7309.785: 59.1847% ( 109) 00:30:39.453 7309.785 - 7360.197: 59.7671% ( 101) 00:30:39.453 7360.197 - 7410.609: 60.3494% ( 101) 00:30:39.453 7410.609 - 7461.022: 60.9260% ( 100) 00:30:39.453 7461.022 - 7511.434: 61.5141% ( 102) 00:30:39.453 7511.434 - 7561.846: 62.2060% ( 120) 00:30:39.453 7561.846 - 7612.258: 62.9094% ( 122) 00:30:39.453 7612.258 - 7662.671: 63.6647% ( 131) 00:30:39.453 7662.671 - 7713.083: 64.5180% ( 148) 00:30:39.453 7713.083 - 7763.495: 65.3655% ( 147) 00:30:39.453 7763.495 - 7813.908: 66.3861% ( 177) 00:30:39.453 7813.908 - 7864.320: 67.2336% ( 147) 00:30:39.453 7864.320 - 7914.732: 68.0754% ( 146) 00:30:39.453 7914.732 - 7965.145: 69.0556% ( 170) 00:30:39.453 7965.145 - 8015.557: 70.0761% ( 177) 00:30:39.453 8015.557 - 8065.969: 71.0217% ( 164) 00:30:39.453 8065.969 - 8116.382: 72.3536% ( 231) 00:30:39.453 8116.382 - 8166.794: 73.5932% ( 215) 00:30:39.453 8166.794 - 8217.206: 74.7521% ( 201) 00:30:39.453 8217.206 - 8267.618: 76.0032% ( 217) 00:30:39.453 8267.618 - 8318.031: 77.0929% ( 189) 00:30:39.453 8318.031 - 8368.443: 78.1596% ( 185) 00:30:39.453 8368.443 - 8418.855: 79.2493% ( 189) 00:30:39.453 8418.855 - 8469.268: 80.3909% ( 198) 00:30:39.453 8469.268 - 8519.680: 81.5325% ( 198) 00:30:39.453 8519.680 - 8570.092: 82.7260% ( 207) 00:30:39.453 8570.092 - 8620.505: 83.8676% ( 198) 00:30:39.453 8620.505 - 8670.917: 84.8939% ( 178) 00:30:39.453 8670.917 - 8721.329: 85.8914% ( 173) 00:30:39.453 8721.329 - 8771.742: 86.7678% ( 152) 00:30:39.453 8771.742 - 8822.154: 87.6326% ( 150) 00:30:39.453 8822.154 - 8872.566: 88.4052% ( 134) 00:30:39.453 8872.566 - 8922.978: 89.1836% ( 135) 00:30:39.453 8922.978 - 8973.391: 89.8639% ( 118) 00:30:39.453 8973.391 - 9023.803: 90.5212% ( 114) 00:30:39.453 9023.803 - 9074.215: 91.0863% ( 98) 00:30:39.453 9074.215 - 9124.628: 91.7493% ( 115) 00:30:39.453 9124.628 - 9175.040: 92.2855% ( 93) 00:30:39.453 9175.040 - 9225.452: 92.7583% ( 82) 00:30:39.453 9225.452 - 9275.865: 93.1734% ( 72) 00:30:39.453 9275.865 - 9326.277: 93.5655% ( 68) 00:30:39.453 9326.277 - 9376.689: 93.9172% ( 61) 00:30:39.453 9376.689 - 9427.102: 94.1940% ( 48) 00:30:39.453 9427.102 - 9477.514: 94.4649% ( 47) 00:30:39.453 9477.514 - 9527.926: 94.7129% ( 43) 00:30:39.453 
9527.926 - 9578.338: 94.9377% ( 39) 00:30:39.453 9578.338 - 9628.751: 95.1280% ( 33) 00:30:39.453 9628.751 - 9679.163: 95.3010% ( 30) 00:30:39.453 9679.163 - 9729.575: 95.4624% ( 28) 00:30:39.453 9729.575 - 9779.988: 95.6296% ( 29) 00:30:39.453 9779.988 - 9830.400: 95.8083% ( 31) 00:30:39.453 9830.400 - 9880.812: 95.9929% ( 32) 00:30:39.453 9880.812 - 9931.225: 96.1139% ( 21) 00:30:39.453 9931.225 - 9981.637: 96.2235% ( 19) 00:30:39.453 9981.637 - 10032.049: 96.3388% ( 20) 00:30:39.453 10032.049 - 10082.462: 96.4714% ( 23) 00:30:39.453 10082.462 - 10132.874: 96.5810% ( 19) 00:30:39.453 10132.874 - 10183.286: 96.7020% ( 21) 00:30:39.453 10183.286 - 10233.698: 96.8289% ( 22) 00:30:39.453 10233.698 - 10284.111: 96.9384% ( 19) 00:30:39.453 10284.111 - 10334.523: 97.0422% ( 18) 00:30:39.453 10334.523 - 10384.935: 97.1460% ( 18) 00:30:39.453 10384.935 - 10435.348: 97.2671% ( 21) 00:30:39.453 10435.348 - 10485.760: 97.3708% ( 18) 00:30:39.453 10485.760 - 10536.172: 97.4977% ( 22) 00:30:39.453 10536.172 - 10586.585: 97.6015% ( 18) 00:30:39.453 10586.585 - 10636.997: 97.7053% ( 18) 00:30:39.453 10636.997 - 10687.409: 97.8033% ( 17) 00:30:39.453 10687.409 - 10737.822: 97.8955% ( 16) 00:30:39.453 10737.822 - 10788.234: 97.9993% ( 18) 00:30:39.453 10788.234 - 10838.646: 98.0916% ( 16) 00:30:39.453 10838.646 - 10889.058: 98.1838% ( 16) 00:30:39.453 10889.058 - 10939.471: 98.3049% ( 21) 00:30:39.453 10939.471 - 10989.883: 98.3798% ( 13) 00:30:39.453 10989.883 - 11040.295: 98.4721% ( 16) 00:30:39.453 11040.295 - 11090.708: 98.5413% ( 12) 00:30:39.453 11090.708 - 11141.120: 98.5989% ( 10) 00:30:39.453 11141.120 - 11191.532: 98.6278% ( 5) 00:30:39.453 11191.532 - 11241.945: 98.6393% ( 2) 00:30:39.453 11241.945 - 11292.357: 98.6508% ( 2) 00:30:39.453 11292.357 - 11342.769: 98.6912% ( 7) 00:30:39.453 11342.769 - 11393.182: 98.7431% ( 9) 00:30:39.453 11393.182 - 11443.594: 98.7777% ( 6) 00:30:39.453 11443.594 - 11494.006: 98.7892% ( 2) 00:30:39.453 11494.006 - 11544.418: 98.8123% ( 4) 00:30:39.453 11544.418 - 11594.831: 98.8411% ( 5) 00:30:39.453 11594.831 - 11645.243: 98.8642% ( 4) 00:30:39.453 11645.243 - 11695.655: 98.8988% ( 6) 00:30:39.453 11695.655 - 11746.068: 98.9218% ( 4) 00:30:39.453 11746.068 - 11796.480: 98.9564% ( 6) 00:30:39.453 11796.480 - 11846.892: 98.9795% ( 4) 00:30:39.453 11846.892 - 11897.305: 99.0141% ( 6) 00:30:39.453 11897.305 - 11947.717: 99.0429% ( 5) 00:30:39.453 11947.717 - 11998.129: 99.0660% ( 4) 00:30:39.453 11998.129 - 12048.542: 99.0890% ( 4) 00:30:39.453 12048.542 - 12098.954: 99.1121% ( 4) 00:30:39.453 12098.954 - 12149.366: 99.1351% ( 4) 00:30:39.453 12149.366 - 12199.778: 99.1582% ( 4) 00:30:39.453 12199.778 - 12250.191: 99.1813% ( 4) 00:30:39.453 12250.191 - 12300.603: 99.2043% ( 4) 00:30:39.453 12300.603 - 12351.015: 99.2159% ( 2) 00:30:39.453 12351.015 - 12401.428: 99.2332% ( 3) 00:30:39.453 12401.428 - 12451.840: 99.2447% ( 2) 00:30:39.454 12451.840 - 12502.252: 99.2562% ( 2) 00:30:39.454 12502.252 - 12552.665: 99.2620% ( 1) 00:30:39.454 12754.314 - 12804.726: 99.2678% ( 1) 00:30:39.454 12804.726 - 12855.138: 99.2793% ( 2) 00:30:39.454 12855.138 - 12905.551: 99.2851% ( 1) 00:30:39.454 12905.551 - 13006.375: 99.3081% ( 4) 00:30:39.454 13006.375 - 13107.200: 99.3312% ( 4) 00:30:39.454 13107.200 - 13208.025: 99.3485% ( 3) 00:30:39.454 13208.025 - 13308.849: 99.3773% ( 5) 00:30:39.454 13308.849 - 13409.674: 99.3946% ( 3) 00:30:39.454 13409.674 - 13510.498: 99.4177% ( 4) 00:30:39.454 13510.498 - 13611.323: 99.4350% ( 3) 00:30:39.454 13611.323 - 13712.148: 99.4580% ( 4) 
00:30:39.454 13712.148 - 13812.972: 99.4753% ( 3) 00:30:39.454 13812.972 - 13913.797: 99.4984% ( 4) 00:30:39.454 13913.797 - 14014.622: 99.5099% ( 2) 00:30:39.454 14014.622 - 14115.446: 99.5272% ( 3) 00:30:39.454 14115.446 - 14216.271: 99.5503% ( 4) 00:30:39.454 14216.271 - 14317.095: 99.5733% ( 4) 00:30:39.454 14317.095 - 14417.920: 99.5906% ( 3) 00:30:39.454 14417.920 - 14518.745: 99.6137% ( 4) 00:30:39.454 14518.745 - 14619.569: 99.6310% ( 3) 00:30:39.454 18551.729 - 18652.554: 99.6541% ( 4) 00:30:39.454 18652.554 - 18753.378: 99.6714% ( 3) 00:30:39.454 18753.378 - 18854.203: 99.6944% ( 4) 00:30:39.454 18854.203 - 18955.028: 99.7060% ( 2) 00:30:39.454 18955.028 - 19055.852: 99.7232% ( 3) 00:30:39.454 19055.852 - 19156.677: 99.7405% ( 3) 00:30:39.454 19156.677 - 19257.502: 99.7636% ( 4) 00:30:39.454 19257.502 - 19358.326: 99.7809% ( 3) 00:30:39.454 19358.326 - 19459.151: 99.7982% ( 3) 00:30:39.454 19459.151 - 19559.975: 99.8155% ( 3) 00:30:39.454 19559.975 - 19660.800: 99.8386% ( 4) 00:30:39.454 19660.800 - 19761.625: 99.8559% ( 3) 00:30:39.454 19761.625 - 19862.449: 99.8789% ( 4) 00:30:39.454 19862.449 - 19963.274: 99.8962% ( 3) 00:30:39.454 19963.274 - 20064.098: 99.9193% ( 4) 00:30:39.454 20064.098 - 20164.923: 99.9423% ( 4) 00:30:39.454 20164.923 - 20265.748: 99.9654% ( 4) 00:30:39.454 20265.748 - 20366.572: 99.9827% ( 3) 00:30:39.454 20366.572 - 20467.397: 100.0000% ( 3) 00:30:39.454 00:30:39.454 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:39.454 ============================================================================== 00:30:39.454 Range in us Cumulative IO count 00:30:39.454 5772.209 - 5797.415: 0.0173% ( 3) 00:30:39.454 5797.415 - 5822.622: 0.0923% ( 13) 00:30:39.454 5822.622 - 5847.828: 0.2306% ( 24) 00:30:39.454 5847.828 - 5873.034: 0.3517% ( 21) 00:30:39.454 5873.034 - 5898.240: 0.5881% ( 41) 00:30:39.454 5898.240 - 5923.446: 0.8533% ( 46) 00:30:39.454 5923.446 - 5948.652: 1.3261% ( 82) 00:30:39.454 5948.652 - 5973.858: 1.9430% ( 107) 00:30:39.454 5973.858 - 5999.065: 2.6983% ( 131) 00:30:39.454 5999.065 - 6024.271: 3.9495% ( 217) 00:30:39.454 6024.271 - 6049.477: 5.0161% ( 185) 00:30:39.454 6049.477 - 6074.683: 6.1750% ( 201) 00:30:39.454 6074.683 - 6099.889: 7.5242% ( 234) 00:30:39.454 6099.889 - 6125.095: 8.8676% ( 233) 00:30:39.454 6125.095 - 6150.302: 10.2514% ( 240) 00:30:39.454 6150.302 - 6175.508: 11.6928% ( 250) 00:30:39.454 6175.508 - 6200.714: 13.1861% ( 259) 00:30:39.454 6200.714 - 6225.920: 14.7486% ( 271) 00:30:39.454 6225.920 - 6251.126: 16.3169% ( 272) 00:30:39.454 6251.126 - 6276.332: 17.8275% ( 262) 00:30:39.454 6276.332 - 6301.538: 19.3496% ( 264) 00:30:39.454 6301.538 - 6326.745: 21.0044% ( 287) 00:30:39.454 6326.745 - 6351.951: 22.5842% ( 274) 00:30:39.454 6351.951 - 6377.157: 24.1179% ( 266) 00:30:39.454 6377.157 - 6402.363: 25.7149% ( 277) 00:30:39.454 6402.363 - 6427.569: 27.3063% ( 276) 00:30:39.454 6427.569 - 6452.775: 28.8976% ( 276) 00:30:39.454 6452.775 - 6503.188: 32.1667% ( 567) 00:30:39.454 6503.188 - 6553.600: 35.4647% ( 572) 00:30:39.454 6553.600 - 6604.012: 38.7281% ( 566) 00:30:39.454 6604.012 - 6654.425: 41.9280% ( 555) 00:30:39.454 6654.425 - 6704.837: 44.9896% ( 531) 00:30:39.454 6704.837 - 6755.249: 47.6188% ( 456) 00:30:39.454 6755.249 - 6805.662: 49.8097% ( 380) 00:30:39.454 6805.662 - 6856.074: 51.6086% ( 312) 00:30:39.454 6856.074 - 6906.486: 52.9809% ( 238) 00:30:39.454 6906.486 - 6956.898: 54.1455% ( 202) 00:30:39.454 6956.898 - 7007.311: 55.0911% ( 164) 00:30:39.454 7007.311 - 7057.723: 55.9156% ( 143) 
00:30:39.454 7057.723 - 7108.135: 56.5844% ( 116) 00:30:39.454 7108.135 - 7158.548: 57.1898% ( 105) 00:30:39.454 7158.548 - 7208.960: 57.8759% ( 119) 00:30:39.454 7208.960 - 7259.372: 58.5101% ( 110) 00:30:39.454 7259.372 - 7309.785: 59.0637% ( 96) 00:30:39.454 7309.785 - 7360.197: 59.6229% ( 97) 00:30:39.454 7360.197 - 7410.609: 60.2283% ( 105) 00:30:39.454 7410.609 - 7461.022: 60.8971% ( 116) 00:30:39.454 7461.022 - 7511.434: 61.5948% ( 121) 00:30:39.454 7511.434 - 7561.846: 62.3328% ( 128) 00:30:39.454 7561.846 - 7612.258: 63.0477% ( 124) 00:30:39.454 7612.258 - 7662.671: 63.8376% ( 137) 00:30:39.454 7662.671 - 7713.083: 64.6275% ( 137) 00:30:39.454 7713.083 - 7763.495: 65.4520% ( 143) 00:30:39.454 7763.495 - 7813.908: 66.3745% ( 160) 00:30:39.454 7813.908 - 7864.320: 67.3605% ( 171) 00:30:39.454 7864.320 - 7914.732: 68.4214% ( 184) 00:30:39.454 7914.732 - 7965.145: 69.5514% ( 196) 00:30:39.454 7965.145 - 8015.557: 70.6757% ( 195) 00:30:39.454 8015.557 - 8065.969: 71.7539% ( 187) 00:30:39.454 8065.969 - 8116.382: 72.9474% ( 207) 00:30:39.454 8116.382 - 8166.794: 74.0256% ( 187) 00:30:39.454 8166.794 - 8217.206: 75.1268% ( 191) 00:30:39.454 8217.206 - 8267.618: 76.3030% ( 204) 00:30:39.454 8267.618 - 8318.031: 77.5542% ( 217) 00:30:39.454 8318.031 - 8368.443: 78.6324% ( 187) 00:30:39.454 8368.443 - 8418.855: 79.8259% ( 207) 00:30:39.454 8418.855 - 8469.268: 80.8983% ( 186) 00:30:39.454 8469.268 - 8519.680: 81.9707% ( 186) 00:30:39.454 8519.680 - 8570.092: 83.0547% ( 188) 00:30:39.454 8570.092 - 8620.505: 84.1963% ( 198) 00:30:39.454 8620.505 - 8670.917: 85.1303% ( 162) 00:30:39.454 8670.917 - 8721.329: 86.0874% ( 166) 00:30:39.454 8721.329 - 8771.742: 86.9407% ( 148) 00:30:39.454 8771.742 - 8822.154: 87.7422% ( 139) 00:30:39.454 8822.154 - 8872.566: 88.4571% ( 124) 00:30:39.454 8872.566 - 8922.978: 89.2009% ( 129) 00:30:39.454 8922.978 - 8973.391: 89.8639% ( 115) 00:30:39.454 8973.391 - 9023.803: 90.4117% ( 95) 00:30:39.454 9023.803 - 9074.215: 90.9018% ( 85) 00:30:39.454 9074.215 - 9124.628: 91.4207% ( 90) 00:30:39.454 9124.628 - 9175.040: 91.9165% ( 86) 00:30:39.454 9175.040 - 9225.452: 92.3432% ( 74) 00:30:39.454 9225.452 - 9275.865: 92.7641% ( 73) 00:30:39.454 9275.865 - 9326.277: 93.1273% ( 63) 00:30:39.454 9326.277 - 9376.689: 93.4387% ( 54) 00:30:39.454 9376.689 - 9427.102: 93.7961% ( 62) 00:30:39.454 9427.102 - 9477.514: 94.1363% ( 59) 00:30:39.454 9477.514 - 9527.926: 94.4534% ( 55) 00:30:39.454 9527.926 - 9578.338: 94.6898% ( 41) 00:30:39.454 9578.338 - 9628.751: 94.8974% ( 36) 00:30:39.454 9628.751 - 9679.163: 95.0992% ( 35) 00:30:39.454 9679.163 - 9729.575: 95.3067% ( 36) 00:30:39.454 9729.575 - 9779.988: 95.4797% ( 30) 00:30:39.454 9779.988 - 9830.400: 95.6642% ( 32) 00:30:39.454 9830.400 - 9880.812: 95.8487% ( 32) 00:30:39.454 9880.812 - 9931.225: 96.0044% ( 27) 00:30:39.454 9931.225 - 9981.637: 96.1543% ( 26) 00:30:39.454 9981.637 - 10032.049: 96.2984% ( 25) 00:30:39.454 10032.049 - 10082.462: 96.4195% ( 21) 00:30:39.454 10082.462 - 10132.874: 96.5406% ( 21) 00:30:39.454 10132.874 - 10183.286: 96.6559% ( 20) 00:30:39.454 10183.286 - 10233.698: 96.7597% ( 18) 00:30:39.454 10233.698 - 10284.111: 96.8462% ( 15) 00:30:39.454 10284.111 - 10334.523: 96.9327% ( 15) 00:30:39.454 10334.523 - 10384.935: 97.0307% ( 17) 00:30:39.454 10384.935 - 10435.348: 97.1345% ( 18) 00:30:39.454 10435.348 - 10485.760: 97.2555% ( 21) 00:30:39.454 10485.760 - 10536.172: 97.3478% ( 16) 00:30:39.454 10536.172 - 10586.585: 97.4458% ( 17) 00:30:39.454 10586.585 - 10636.997: 97.5554% ( 19) 00:30:39.454 
10636.997 - 10687.409: 97.6303% ( 13) 00:30:39.454 10687.409 - 10737.822: 97.7110% ( 14) 00:30:39.454 10737.822 - 10788.234: 97.7802% ( 12) 00:30:39.454 10788.234 - 10838.646: 97.8552% ( 13) 00:30:39.454 10838.646 - 10889.058: 97.9417% ( 15) 00:30:39.454 10889.058 - 10939.471: 98.0108% ( 12) 00:30:39.454 10939.471 - 10989.883: 98.0973% ( 15) 00:30:39.454 10989.883 - 11040.295: 98.1780% ( 14) 00:30:39.454 11040.295 - 11090.708: 98.2588% ( 14) 00:30:39.454 11090.708 - 11141.120: 98.3395% ( 14) 00:30:39.454 11141.120 - 11191.532: 98.4202% ( 14) 00:30:39.454 11191.532 - 11241.945: 98.4836% ( 11) 00:30:39.454 11241.945 - 11292.357: 98.5586% ( 13) 00:30:39.454 11292.357 - 11342.769: 98.6566% ( 17) 00:30:39.454 11342.769 - 11393.182: 98.7488% ( 16) 00:30:39.454 11393.182 - 11443.594: 98.8065% ( 10) 00:30:39.454 11443.594 - 11494.006: 98.8642% ( 10) 00:30:39.454 11494.006 - 11544.418: 98.9161% ( 9) 00:30:39.455 11544.418 - 11594.831: 98.9622% ( 8) 00:30:39.455 11594.831 - 11645.243: 99.0083% ( 8) 00:30:39.455 11645.243 - 11695.655: 99.0544% ( 8) 00:30:39.455 11695.655 - 11746.068: 99.0833% ( 5) 00:30:39.455 11746.068 - 11796.480: 99.1179% ( 6) 00:30:39.455 11796.480 - 11846.892: 99.1524% ( 6) 00:30:39.455 11846.892 - 11897.305: 99.1755% ( 4) 00:30:39.455 11897.305 - 11947.717: 99.2159% ( 7) 00:30:39.455 11947.717 - 11998.129: 99.2505% ( 6) 00:30:39.455 11998.129 - 12048.542: 99.2851% ( 6) 00:30:39.455 12048.542 - 12098.954: 99.3081% ( 4) 00:30:39.455 12098.954 - 12149.366: 99.3196% ( 2) 00:30:39.455 12149.366 - 12199.778: 99.3369% ( 3) 00:30:39.455 12199.778 - 12250.191: 99.3600% ( 4) 00:30:39.455 12250.191 - 12300.603: 99.3773% ( 3) 00:30:39.455 12300.603 - 12351.015: 99.4004% ( 4) 00:30:39.455 12351.015 - 12401.428: 99.4177% ( 3) 00:30:39.455 12401.428 - 12451.840: 99.4407% ( 4) 00:30:39.455 12451.840 - 12502.252: 99.4638% ( 4) 00:30:39.455 12502.252 - 12552.665: 99.4869% ( 4) 00:30:39.455 12552.665 - 12603.077: 99.5157% ( 5) 00:30:39.455 12603.077 - 12653.489: 99.5387% ( 4) 00:30:39.455 12653.489 - 12703.902: 99.5560% ( 3) 00:30:39.455 12703.902 - 12754.314: 99.5676% ( 2) 00:30:39.455 12754.314 - 12804.726: 99.5791% ( 2) 00:30:39.455 12804.726 - 12855.138: 99.5906% ( 2) 00:30:39.455 12855.138 - 12905.551: 99.5964% ( 1) 00:30:39.455 12905.551 - 13006.375: 99.6195% ( 4) 00:30:39.455 13006.375 - 13107.200: 99.6310% ( 2) 00:30:39.455 16938.535 - 17039.360: 99.6368% ( 1) 00:30:39.455 17039.360 - 17140.185: 99.6598% ( 4) 00:30:39.455 17140.185 - 17241.009: 99.6829% ( 4) 00:30:39.455 17241.009 - 17341.834: 99.7002% ( 3) 00:30:39.455 17341.834 - 17442.658: 99.7232% ( 4) 00:30:39.455 17442.658 - 17543.483: 99.7405% ( 3) 00:30:39.455 17543.483 - 17644.308: 99.7578% ( 3) 00:30:39.455 17644.308 - 17745.132: 99.7809% ( 4) 00:30:39.455 17745.132 - 17845.957: 99.7982% ( 3) 00:30:39.455 17845.957 - 17946.782: 99.8213% ( 4) 00:30:39.455 17946.782 - 18047.606: 99.8443% ( 4) 00:30:39.455 18047.606 - 18148.431: 99.8616% ( 3) 00:30:39.455 18148.431 - 18249.255: 99.8847% ( 4) 00:30:39.455 18249.255 - 18350.080: 99.9020% ( 3) 00:30:39.455 18350.080 - 18450.905: 99.9250% ( 4) 00:30:39.455 18450.905 - 18551.729: 99.9423% ( 3) 00:30:39.455 18551.729 - 18652.554: 99.9654% ( 4) 00:30:39.455 18652.554 - 18753.378: 99.9827% ( 3) 00:30:39.455 18753.378 - 18854.203: 100.0000% ( 3) 00:30:39.455 00:30:39.455 23:12:19 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:40.832 Initializing NVMe Controllers 00:30:40.832 Attached to NVMe 
Controller at 0000:00:10.0 [1b36:0010]
00:30:40.832 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:30:40.832 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:30:40.832 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:30:40.832 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:30:40.832 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:30:40.832 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:30:40.832 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:30:40.832 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:30:40.832 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:30:40.832 Initialization complete. Launching workers.
00:30:40.832 ========================================================
00:30:40.832                                                            Latency(us)
00:30:40.832 Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:40.832 PCIE (0000:00:10.0) NSID 1 from core 0:   14900.01     174.61    8601.55    6173.78   31770.44
00:30:40.832 PCIE (0000:00:11.0) NSID 1 from core 0:   14900.01     174.61    8587.62    6308.93   29874.04
00:30:40.832 PCIE (0000:00:13.0) NSID 1 from core 0:   14900.01     174.61    8574.07    6210.62   28639.15
00:30:40.832 PCIE (0000:00:12.0) NSID 1 from core 0:   14900.01     174.61    8560.32    6289.78   26704.96
00:30:40.832 PCIE (0000:00:12.0) NSID 2 from core 0:   14900.01     174.61    8546.77    6234.93   25149.01
00:30:40.832 PCIE (0000:00:12.0) NSID 3 from core 0:   14900.01     174.61    8533.05    6294.73   23192.99
00:30:40.832 ========================================================
00:30:40.832 Total                                  :   89400.03    1047.66    8567.23    6173.78   31770.44
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6553.600us
00:30:40.832 10.00000% :  7007.311us
00:30:40.832 25.00000% :  7511.434us
00:30:40.832 50.00000% :  8267.618us
00:30:40.832 75.00000% :  9225.452us
00:30:40.832 90.00000% : 10183.286us
00:30:40.832 95.00000% : 10889.058us
00:30:40.832 98.00000% : 11897.305us
00:30:40.832 99.00000% : 14417.920us
00:30:40.832 99.50000% : 25105.329us
00:30:40.832 99.90000% : 31457.280us
00:30:40.832 99.99000% : 31860.578us
00:30:40.832 99.99900% : 31860.578us
00:30:40.832 99.99990% : 31860.578us
00:30:40.832 99.99999% : 31860.578us
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6604.012us
00:30:40.832 10.00000% :  7007.311us
00:30:40.832 25.00000% :  7511.434us
00:30:40.832 50.00000% :  8267.618us
00:30:40.832 75.00000% :  9225.452us
00:30:40.832 90.00000% : 10132.874us
00:30:40.832 95.00000% : 10989.883us
00:30:40.832 98.00000% : 12300.603us
00:30:40.832 99.00000% : 14216.271us
00:30:40.832 99.50000% : 24197.908us
00:30:40.832 99.90000% : 29642.437us
00:30:40.832 99.99000% : 29844.086us
00:30:40.832 99.99900% : 30045.735us
00:30:40.832 99.99990% : 30045.735us
00:30:40.832 99.99999% : 30045.735us
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6604.012us
00:30:40.832 10.00000% :  7057.723us
00:30:40.832 25.00000% :  7461.022us
00:30:40.832 50.00000% :  8318.031us
00:30:40.832 75.00000% :  9275.865us
00:30:40.832 90.00000% : 10032.049us
00:30:40.832 95.00000% : 10788.234us
00:30:40.832 98.00000% : 12250.191us
00:30:40.832 99.00000% : 14014.622us
00:30:40.832 99.50000% : 23088.837us
00:30:40.832 99.90000% : 28432.542us
00:30:40.832 99.99000% : 28634.191us
00:30:40.832 99.99900% : 28835.840us
00:30:40.832 99.99990% : 28835.840us
00:30:40.832 99.99999% : 28835.840us
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6654.425us
00:30:40.832 10.00000% :  7057.723us
00:30:40.832 25.00000% :  7511.434us
00:30:40.832 50.00000% :  8318.031us
00:30:40.832 75.00000% :  9275.865us
00:30:40.832 90.00000% :  9981.637us
00:30:40.832 95.00000% : 10737.822us
00:30:40.832 98.00000% : 12703.902us
00:30:40.832 99.00000% : 13308.849us
00:30:40.832 99.50000% : 21878.942us
00:30:40.832 99.90000% : 26416.049us
00:30:40.832 99.99000% : 26819.348us
00:30:40.832 99.99900% : 26819.348us
00:30:40.832 99.99990% : 26819.348us
00:30:40.832 99.99999% : 26819.348us
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6654.425us
00:30:40.832 10.00000% :  7057.723us
00:30:40.832 25.00000% :  7561.846us
00:30:40.832 50.00000% :  8318.031us
00:30:40.832 75.00000% :  9275.865us
00:30:40.832 90.00000% :  9981.637us
00:30:40.832 95.00000% : 10687.409us
00:30:40.832 98.00000% : 12300.603us
00:30:40.832 99.00000% : 13510.498us
00:30:40.832 99.50000% : 19559.975us
00:30:40.832 99.90000% : 24802.855us
00:30:40.832 99.99000% : 25206.154us
00:30:40.832 99.99900% : 25206.154us
00:30:40.832 99.99990% : 25206.154us
00:30:40.832 99.99999% : 25206.154us
00:30:40.832
00:30:40.832 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:30:40.832 =================================================================================
00:30:40.832  1.00000% :  6654.425us
00:30:40.832 10.00000% :  7057.723us
00:30:40.832 25.00000% :  7561.846us
00:30:40.832 50.00000% :  8267.618us
00:30:40.832 75.00000% :  9275.865us
00:30:40.832 90.00000% : 10032.049us
00:30:40.832 95.00000% : 10889.058us
00:30:40.832 98.00000% : 11846.892us
00:30:40.832 99.00000% : 14115.446us
00:30:40.832 99.50000% : 18551.729us
00:30:40.832 99.90000% : 22887.188us
00:30:40.832 99.99000% : 23189.662us
00:30:40.832 99.99900% : 23290.486us
00:30:40.832 99.99990% : 23290.486us
00:30:40.832 99.99999% : 23290.486us
00:30:40.832
00:30:40.832 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:40.832 ==============================================================================
00:30:40.832        Range in us     Cumulative    IO count
00:30:40.833   6150.302 -  6175.508:    0.0067% (     1)
00:30:40.833   6175.508 -  6200.714:    0.0134% (     1)
00:30:40.833   6200.714 -  6225.920:    0.0268% (     2)
00:30:40.833   6225.920 -  6251.126:    0.0469% (     3)
00:30:40.833   6251.126 -  6276.332:    0.0805% (     5)
00:30:40.833   6276.332 -  6301.538:    0.1006% (     3)
00:30:40.833   6301.538 -  6326.745:    0.1274% (     4)
00:30:40.833   6326.745 -  6351.951:    0.1677% (     6)
00:30:40.833   6351.951 -  6377.157:    0.1945% (     4)
00:30:40.833   6377.157 -  6402.363:    0.2548% (     9)
00:30:40.833   6402.363 -  6427.569:    0.3889% (    20)
00:30:40.833   6427.569 -  6452.775:    0.4962% (    16)
00:30:40.833   6452.775 -  6503.188:    0.7846% (    43)
00:30:40.833   6503.188 -  6553.600:    1.2004% (    62)
00:30:40.833   6553.600 -  6604.012:    1.7771% (    86)
00:30:40.833   6604.012 -  6654.425:    2.7226% (   141)
00:30:40.833   6654.425 -  6704.837:    3.7017% (   146)
00:30:40.833   6704.837 -  6755.249:    4.6473% (   141)
00:30:40.833   6755.249 -  6805.662:    5.7336% (   162)
00:30:40.833   6805.662 -
6856.074: 7.1151% ( 206) 00:30:40.833 6856.074 - 6906.486: 8.3423% ( 183) 00:30:40.833 6906.486 - 6956.898: 9.6499% ( 195) 00:30:40.833 6956.898 - 7007.311: 10.9643% ( 196) 00:30:40.833 7007.311 - 7057.723: 12.5134% ( 231) 00:30:40.833 7057.723 - 7108.135: 14.0491% ( 229) 00:30:40.833 7108.135 - 7158.548: 15.3903% ( 200) 00:30:40.833 7158.548 - 7208.960: 16.7449% ( 202) 00:30:40.833 7208.960 - 7259.372: 18.2135% ( 219) 00:30:40.833 7259.372 - 7309.785: 19.6218% ( 210) 00:30:40.833 7309.785 - 7360.197: 20.9496% ( 198) 00:30:40.833 7360.197 - 7410.609: 22.4048% ( 217) 00:30:40.833 7410.609 - 7461.022: 23.9270% ( 227) 00:30:40.833 7461.022 - 7511.434: 25.2079% ( 191) 00:30:40.833 7511.434 - 7561.846: 26.4753% ( 189) 00:30:40.833 7561.846 - 7612.258: 27.7629% ( 192) 00:30:40.833 7612.258 - 7662.671: 29.1644% ( 209) 00:30:40.833 7662.671 - 7713.083: 30.9348% ( 264) 00:30:40.833 7713.083 - 7763.495: 33.0137% ( 310) 00:30:40.833 7763.495 - 7813.908: 34.9785% ( 293) 00:30:40.833 7813.908 - 7864.320: 36.9702% ( 297) 00:30:40.833 7864.320 - 7914.732: 38.6803% ( 255) 00:30:40.833 7914.732 - 7965.145: 40.5781% ( 283) 00:30:40.833 7965.145 - 8015.557: 42.2546% ( 250) 00:30:40.833 8015.557 - 8065.969: 43.9914% ( 259) 00:30:40.833 8065.969 - 8116.382: 45.7014% ( 255) 00:30:40.833 8116.382 - 8166.794: 47.3176% ( 241) 00:30:40.833 8166.794 - 8217.206: 48.7929% ( 220) 00:30:40.833 8217.206 - 8267.618: 50.2280% ( 214) 00:30:40.833 8267.618 - 8318.031: 51.5156% ( 192) 00:30:40.833 8318.031 - 8368.443: 53.0378% ( 227) 00:30:40.833 8368.443 - 8418.855: 54.3991% ( 203) 00:30:40.833 8418.855 - 8469.268: 56.0555% ( 247) 00:30:40.833 8469.268 - 8519.680: 57.5778% ( 227) 00:30:40.833 8519.680 - 8570.092: 58.9458% ( 204) 00:30:40.833 8570.092 - 8620.505: 60.6022% ( 247) 00:30:40.833 8620.505 - 8670.917: 61.8093% ( 180) 00:30:40.833 8670.917 - 8721.329: 62.8286% ( 152) 00:30:40.833 8721.329 - 8771.742: 64.0424% ( 181) 00:30:40.833 8771.742 - 8822.154: 65.3098% ( 189) 00:30:40.833 8822.154 - 8872.566: 66.5236% ( 181) 00:30:40.833 8872.566 - 8922.978: 68.0928% ( 234) 00:30:40.833 8922.978 - 8973.391: 69.5145% ( 212) 00:30:40.833 8973.391 - 9023.803: 70.5740% ( 158) 00:30:40.833 9023.803 - 9074.215: 71.7543% ( 176) 00:30:40.833 9074.215 - 9124.628: 73.1827% ( 213) 00:30:40.833 9124.628 - 9175.040: 74.6848% ( 224) 00:30:40.833 9175.040 - 9225.452: 76.1534% ( 219) 00:30:40.833 9225.452 - 9275.865: 77.2733% ( 167) 00:30:40.833 9275.865 - 9326.277: 78.2524% ( 146) 00:30:40.833 9326.277 - 9376.689: 79.2717% ( 152) 00:30:40.833 9376.689 - 9427.102: 80.3648% ( 163) 00:30:40.833 9427.102 - 9477.514: 81.5249% ( 173) 00:30:40.833 9477.514 - 9527.926: 82.4034% ( 131) 00:30:40.833 9527.926 - 9578.338: 83.2417% ( 125) 00:30:40.833 9578.338 - 9628.751: 83.9995% ( 113) 00:30:40.833 9628.751 - 9679.163: 84.7371% ( 110) 00:30:40.833 9679.163 - 9729.575: 85.5284% ( 118) 00:30:40.833 9729.575 - 9779.988: 86.2862% ( 113) 00:30:40.833 9779.988 - 9830.400: 87.0172% ( 109) 00:30:40.833 9830.400 - 9880.812: 87.6341% ( 92) 00:30:40.833 9880.812 - 9931.225: 88.0566% ( 63) 00:30:40.833 9931.225 - 9981.637: 88.4187% ( 54) 00:30:40.833 9981.637 - 10032.049: 88.8546% ( 65) 00:30:40.833 10032.049 - 10082.462: 89.2972% ( 66) 00:30:40.833 10082.462 - 10132.874: 89.7331% ( 65) 00:30:40.833 10132.874 - 10183.286: 90.1757% ( 66) 00:30:40.833 10183.286 - 10233.698: 90.5982% ( 63) 00:30:40.833 10233.698 - 10284.111: 90.9268% ( 49) 00:30:40.833 10284.111 - 10334.523: 91.3560% ( 64) 00:30:40.833 10334.523 - 10384.935: 91.6309% ( 41) 00:30:40.833 10384.935 - 
10435.348: 91.9997% ( 55) 00:30:40.833 10435.348 - 10485.760: 92.3887% ( 58) 00:30:40.833 10485.760 - 10536.172: 92.9185% ( 79) 00:30:40.833 10536.172 - 10586.585: 93.3476% ( 64) 00:30:40.833 10586.585 - 10636.997: 93.6896% ( 51) 00:30:40.833 10636.997 - 10687.409: 93.9914% ( 45) 00:30:40.833 10687.409 - 10737.822: 94.2597% ( 40) 00:30:40.833 10737.822 - 10788.234: 94.5011% ( 36) 00:30:40.833 10788.234 - 10838.646: 94.7157% ( 32) 00:30:40.833 10838.646 - 10889.058: 95.0778% ( 54) 00:30:40.833 10889.058 - 10939.471: 95.2857% ( 31) 00:30:40.833 10939.471 - 10989.883: 95.5405% ( 38) 00:30:40.833 10989.883 - 11040.295: 95.7216% ( 27) 00:30:40.833 11040.295 - 11090.708: 95.9630% ( 36) 00:30:40.833 11090.708 - 11141.120: 96.1508% ( 28) 00:30:40.833 11141.120 - 11191.532: 96.3318% ( 27) 00:30:40.833 11191.532 - 11241.945: 96.5464% ( 32) 00:30:40.833 11241.945 - 11292.357: 96.6872% ( 21) 00:30:40.833 11292.357 - 11342.769: 96.8549% ( 25) 00:30:40.833 11342.769 - 11393.182: 97.0359% ( 27) 00:30:40.833 11393.182 - 11443.594: 97.2036% ( 25) 00:30:40.833 11443.594 - 11494.006: 97.3444% ( 21) 00:30:40.833 11494.006 - 11544.418: 97.4651% ( 18) 00:30:40.833 11544.418 - 11594.831: 97.5456% ( 12) 00:30:40.833 11594.831 - 11645.243: 97.6663% ( 18) 00:30:40.833 11645.243 - 11695.655: 97.7535% ( 13) 00:30:40.833 11695.655 - 11746.068: 97.8340% ( 12) 00:30:40.833 11746.068 - 11796.480: 97.9144% ( 12) 00:30:40.833 11796.480 - 11846.892: 97.9748% ( 9) 00:30:40.833 11846.892 - 11897.305: 98.0620% ( 13) 00:30:40.833 11897.305 - 11947.717: 98.1223% ( 9) 00:30:40.833 11947.717 - 11998.129: 98.1626% ( 6) 00:30:40.833 12098.954 - 12149.366: 98.1760% ( 2) 00:30:40.833 12149.366 - 12199.778: 98.1894% ( 2) 00:30:40.833 12199.778 - 12250.191: 98.2028% ( 2) 00:30:40.833 12250.191 - 12300.603: 98.2095% ( 1) 00:30:40.833 12351.015 - 12401.428: 98.2162% ( 1) 00:30:40.833 12401.428 - 12451.840: 98.2363% ( 3) 00:30:40.833 12451.840 - 12502.252: 98.2430% ( 1) 00:30:40.833 12502.252 - 12552.665: 98.2631% ( 3) 00:30:40.833 12552.665 - 12603.077: 98.2698% ( 1) 00:30:40.833 12603.077 - 12653.489: 98.2833% ( 2) 00:30:40.833 13409.674 - 13510.498: 98.3034% ( 3) 00:30:40.833 13510.498 - 13611.323: 98.3101% ( 1) 00:30:40.833 13611.323 - 13712.148: 98.3503% ( 6) 00:30:40.833 13712.148 - 13812.972: 98.3906% ( 6) 00:30:40.833 13812.972 - 13913.797: 98.4643% ( 11) 00:30:40.833 13913.797 - 14014.622: 98.6521% ( 28) 00:30:40.833 14014.622 - 14115.446: 98.7594% ( 16) 00:30:40.833 14115.446 - 14216.271: 98.9002% ( 21) 00:30:40.833 14216.271 - 14317.095: 98.9673% ( 10) 00:30:40.833 14317.095 - 14417.920: 99.0276% ( 9) 00:30:40.833 14417.920 - 14518.745: 99.0746% ( 7) 00:30:40.833 14518.745 - 14619.569: 99.1215% ( 7) 00:30:40.833 14619.569 - 14720.394: 99.1416% ( 3) 00:30:40.833 23290.486 - 23391.311: 99.1483% ( 1) 00:30:40.833 23391.311 - 23492.135: 99.1752% ( 4) 00:30:40.833 23492.135 - 23592.960: 99.1886% ( 2) 00:30:40.833 23592.960 - 23693.785: 99.2154% ( 4) 00:30:40.833 23693.785 - 23794.609: 99.2288% ( 2) 00:30:40.833 23794.609 - 23895.434: 99.2489% ( 3) 00:30:40.833 23895.434 - 23996.258: 99.2758% ( 4) 00:30:40.833 23996.258 - 24097.083: 99.2959% ( 3) 00:30:40.833 24097.083 - 24197.908: 99.3160% ( 3) 00:30:40.833 24197.908 - 24298.732: 99.3428% ( 4) 00:30:40.833 24298.732 - 24399.557: 99.3898% ( 7) 00:30:40.833 24399.557 - 24500.382: 99.4166% ( 4) 00:30:40.833 24500.382 - 24601.206: 99.4367% ( 3) 00:30:40.833 24601.206 - 24702.031: 99.4501% ( 2) 00:30:40.833 24702.031 - 24802.855: 99.4702% ( 3) 00:30:40.833 24802.855 - 24903.680: 99.4836% 
( 2) 00:30:40.833 24903.680 - 25004.505: 99.4970% ( 2) 00:30:40.833 25004.505 - 25105.329: 99.5172% ( 3) 00:30:40.833 25105.329 - 25206.154: 99.5306% ( 2) 00:30:40.833 25306.978 - 25407.803: 99.5440% ( 2) 00:30:40.833 25407.803 - 25508.628: 99.5574% ( 2) 00:30:40.833 25508.628 - 25609.452: 99.5708% ( 2) 00:30:40.833 29642.437 - 29844.086: 99.5775% ( 1) 00:30:40.833 29844.086 - 30045.735: 99.6178% ( 6) 00:30:40.833 30045.735 - 30247.385: 99.6647% ( 7) 00:30:40.833 30247.385 - 30449.034: 99.6982% ( 5) 00:30:40.833 30449.034 - 30650.683: 99.7452% ( 7) 00:30:40.833 30650.683 - 30852.332: 99.7854% ( 6) 00:30:40.833 30852.332 - 31053.982: 99.8323% ( 7) 00:30:40.833 31053.982 - 31255.631: 99.8793% ( 7) 00:30:40.833 31255.631 - 31457.280: 99.9262% ( 7) 00:30:40.833 31457.280 - 31658.929: 99.9799% ( 8) 00:30:40.833 31658.929 - 31860.578: 100.0000% ( 3) 00:30:40.833 00:30:40.833 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:40.833 ============================================================================== 00:30:40.833 Range in us Cumulative IO count 00:30:40.833 6301.538 - 6326.745: 0.0201% ( 3) 00:30:40.833 6351.951 - 6377.157: 0.0268% ( 1) 00:30:40.833 6402.363 - 6427.569: 0.0402% ( 2) 00:30:40.833 6427.569 - 6452.775: 0.0536% ( 2) 00:30:40.833 6452.775 - 6503.188: 0.1542% ( 15) 00:30:40.833 6503.188 - 6553.600: 0.4158% ( 39) 00:30:40.833 6553.600 - 6604.012: 1.0193% ( 90) 00:30:40.833 6604.012 - 6654.425: 1.3814% ( 54) 00:30:40.833 6654.425 - 6704.837: 2.2599% ( 131) 00:30:40.833 6704.837 - 6755.249: 3.2524% ( 148) 00:30:40.833 6755.249 - 6805.662: 4.8283% ( 235) 00:30:40.833 6805.662 - 6856.074: 6.4981% ( 249) 00:30:40.833 6856.074 - 6906.486: 8.5233% ( 302) 00:30:40.833 6906.486 - 6956.898: 9.9920% ( 219) 00:30:40.833 6956.898 - 7007.311: 11.4941% ( 224) 00:30:40.834 7007.311 - 7057.723: 12.9761% ( 221) 00:30:40.834 7057.723 - 7108.135: 14.3844% ( 210) 00:30:40.834 7108.135 - 7158.548: 15.7927% ( 210) 00:30:40.834 7158.548 - 7208.960: 17.2411% ( 216) 00:30:40.834 7208.960 - 7259.372: 18.5958% ( 202) 00:30:40.834 7259.372 - 7309.785: 19.9370% ( 200) 00:30:40.834 7309.785 - 7360.197: 21.3251% ( 207) 00:30:40.834 7360.197 - 7410.609: 22.7267% ( 209) 00:30:40.834 7410.609 - 7461.022: 24.1483% ( 212) 00:30:40.834 7461.022 - 7511.434: 25.5834% ( 214) 00:30:40.834 7511.434 - 7561.846: 27.0118% ( 213) 00:30:40.834 7561.846 - 7612.258: 28.2725% ( 188) 00:30:40.834 7612.258 - 7662.671: 29.2516% ( 146) 00:30:40.834 7662.671 - 7713.083: 30.8141% ( 233) 00:30:40.834 7713.083 - 7763.495: 32.4303% ( 241) 00:30:40.834 7763.495 - 7813.908: 34.8176% ( 356) 00:30:40.834 7813.908 - 7864.320: 36.5075% ( 252) 00:30:40.834 7864.320 - 7914.732: 38.6534% ( 320) 00:30:40.834 7914.732 - 7965.145: 41.1145% ( 367) 00:30:40.834 7965.145 - 8015.557: 43.1062% ( 297) 00:30:40.834 8015.557 - 8065.969: 44.4608% ( 202) 00:30:40.834 8065.969 - 8116.382: 45.7283% ( 189) 00:30:40.834 8116.382 - 8166.794: 47.8943% ( 323) 00:30:40.834 8166.794 - 8217.206: 49.2355% ( 200) 00:30:40.834 8217.206 - 8267.618: 50.2749% ( 155) 00:30:40.834 8267.618 - 8318.031: 51.5156% ( 185) 00:30:40.834 8318.031 - 8368.443: 53.2457% ( 258) 00:30:40.834 8368.443 - 8418.855: 54.9692% ( 257) 00:30:40.834 8418.855 - 8469.268: 56.3104% ( 200) 00:30:40.834 8469.268 - 8519.680: 57.9332% ( 242) 00:30:40.834 8519.680 - 8570.092: 59.7304% ( 268) 00:30:40.834 8570.092 - 8620.505: 61.0783% ( 201) 00:30:40.834 8620.505 - 8670.917: 62.3256% ( 186) 00:30:40.834 8670.917 - 8721.329: 63.5059% ( 176) 00:30:40.834 8721.329 - 8771.742: 64.4514% ( 141) 
00:30:40.834 8771.742 - 8822.154: 65.6921% ( 185) 00:30:40.834 8822.154 - 8872.566: 66.9729% ( 191) 00:30:40.834 8872.566 - 8922.978: 68.1867% ( 181) 00:30:40.834 8922.978 - 8973.391: 69.7358% ( 231) 00:30:40.834 8973.391 - 9023.803: 71.1776% ( 215) 00:30:40.834 9023.803 - 9074.215: 72.1499% ( 145) 00:30:40.834 9074.215 - 9124.628: 73.3168% ( 174) 00:30:40.834 9124.628 - 9175.040: 74.3830% ( 159) 00:30:40.834 9175.040 - 9225.452: 75.6505% ( 189) 00:30:40.834 9225.452 - 9275.865: 76.7905% ( 170) 00:30:40.834 9275.865 - 9326.277: 77.8970% ( 165) 00:30:40.834 9326.277 - 9376.689: 79.1108% ( 181) 00:30:40.834 9376.689 - 9427.102: 80.3045% ( 178) 00:30:40.834 9427.102 - 9477.514: 81.5048% ( 179) 00:30:40.834 9477.514 - 9527.926: 82.5107% ( 150) 00:30:40.834 9527.926 - 9578.338: 83.3758% ( 129) 00:30:40.834 9578.338 - 9628.751: 84.4152% ( 155) 00:30:40.834 9628.751 - 9679.163: 85.5083% ( 163) 00:30:40.834 9679.163 - 9729.575: 86.4472% ( 140) 00:30:40.834 9729.575 - 9779.988: 87.2318% ( 117) 00:30:40.834 9779.988 - 9830.400: 87.8286% ( 89) 00:30:40.834 9830.400 - 9880.812: 88.2980% ( 70) 00:30:40.834 9880.812 - 9931.225: 88.6937% ( 59) 00:30:40.834 9931.225 - 9981.637: 89.0357% ( 51) 00:30:40.834 9981.637 - 10032.049: 89.4045% ( 55) 00:30:40.834 10032.049 - 10082.462: 89.9879% ( 87) 00:30:40.834 10082.462 - 10132.874: 90.4842% ( 74) 00:30:40.834 10132.874 - 10183.286: 91.0207% ( 80) 00:30:40.834 10183.286 - 10233.698: 91.5035% ( 72) 00:30:40.834 10233.698 - 10284.111: 91.9595% ( 68) 00:30:40.834 10284.111 - 10334.523: 92.3954% ( 65) 00:30:40.834 10334.523 - 10384.935: 92.6905% ( 44) 00:30:40.834 10384.935 - 10435.348: 93.0459% ( 53) 00:30:40.834 10435.348 - 10485.760: 93.2806% ( 35) 00:30:40.834 10485.760 - 10536.172: 93.4549% ( 26) 00:30:40.834 10536.172 - 10586.585: 93.6159% ( 24) 00:30:40.834 10586.585 - 10636.997: 93.8104% ( 29) 00:30:40.834 10636.997 - 10687.409: 94.0585% ( 37) 00:30:40.834 10687.409 - 10737.822: 94.1792% ( 18) 00:30:40.834 10737.822 - 10788.234: 94.3334% ( 23) 00:30:40.834 10788.234 - 10838.646: 94.4877% ( 23) 00:30:40.834 10838.646 - 10889.058: 94.6888% ( 30) 00:30:40.834 10889.058 - 10939.471: 94.8766% ( 28) 00:30:40.834 10939.471 - 10989.883: 95.0308% ( 23) 00:30:40.834 10989.883 - 11040.295: 95.2119% ( 27) 00:30:40.834 11040.295 - 11090.708: 95.4936% ( 42) 00:30:40.834 11090.708 - 11141.120: 95.6612% ( 25) 00:30:40.834 11141.120 - 11191.532: 95.8087% ( 22) 00:30:40.834 11191.532 - 11241.945: 96.0300% ( 33) 00:30:40.834 11241.945 - 11292.357: 96.1239% ( 14) 00:30:40.834 11292.357 - 11342.769: 96.2178% ( 14) 00:30:40.834 11342.769 - 11393.182: 96.3184% ( 15) 00:30:40.834 11393.182 - 11443.594: 96.5196% ( 30) 00:30:40.834 11443.594 - 11494.006: 96.7342% ( 32) 00:30:40.834 11494.006 - 11544.418: 96.8146% ( 12) 00:30:40.834 11544.418 - 11594.831: 96.8750% ( 9) 00:30:40.834 11594.831 - 11645.243: 96.9555% ( 12) 00:30:40.834 11645.243 - 11695.655: 97.0359% ( 12) 00:30:40.834 11695.655 - 11746.068: 97.1298% ( 14) 00:30:40.834 11746.068 - 11796.480: 97.2304% ( 15) 00:30:40.834 11796.480 - 11846.892: 97.3511% ( 18) 00:30:40.834 11846.892 - 11897.305: 97.5255% ( 26) 00:30:40.834 11897.305 - 11947.717: 97.6931% ( 25) 00:30:40.834 11947.717 - 11998.129: 97.7468% ( 8) 00:30:40.834 11998.129 - 12048.542: 97.8407% ( 14) 00:30:40.834 12048.542 - 12098.954: 97.9144% ( 11) 00:30:40.834 12098.954 - 12149.366: 97.9345% ( 3) 00:30:40.834 12149.366 - 12199.778: 97.9681% ( 5) 00:30:40.834 12199.778 - 12250.191: 97.9748% ( 1) 00:30:40.834 12250.191 - 12300.603: 98.0016% ( 4) 00:30:40.834 
12300.603 - 12351.015: 98.0351% ( 5) 00:30:40.834 12351.015 - 12401.428: 98.2229% ( 28) 00:30:40.834 12401.428 - 12451.840: 98.2363% ( 2) 00:30:40.834 12451.840 - 12502.252: 98.2430% ( 1) 00:30:40.834 12502.252 - 12552.665: 98.2564% ( 2) 00:30:40.834 12552.665 - 12603.077: 98.2631% ( 1) 00:30:40.834 12603.077 - 12653.489: 98.2833% ( 3) 00:30:40.834 13812.972 - 13913.797: 98.3704% ( 13) 00:30:40.834 13913.797 - 14014.622: 98.6521% ( 42) 00:30:40.834 14014.622 - 14115.446: 98.9203% ( 40) 00:30:40.834 14115.446 - 14216.271: 99.0276% ( 16) 00:30:40.834 14216.271 - 14317.095: 99.1282% ( 15) 00:30:40.834 14317.095 - 14417.920: 99.1416% ( 2) 00:30:40.834 22383.065 - 22483.889: 99.1550% ( 2) 00:30:40.834 22483.889 - 22584.714: 99.1752% ( 3) 00:30:40.834 22584.714 - 22685.538: 99.1953% ( 3) 00:30:40.834 22685.538 - 22786.363: 99.2221% ( 4) 00:30:40.834 22786.363 - 22887.188: 99.2422% ( 3) 00:30:40.834 22887.188 - 22988.012: 99.2556% ( 2) 00:30:40.834 22988.012 - 23088.837: 99.2758% ( 3) 00:30:40.834 23088.837 - 23189.662: 99.3026% ( 4) 00:30:40.834 23189.662 - 23290.486: 99.3227% ( 3) 00:30:40.834 23290.486 - 23391.311: 99.3428% ( 3) 00:30:40.834 23391.311 - 23492.135: 99.3629% ( 3) 00:30:40.834 23492.135 - 23592.960: 99.3830% ( 3) 00:30:40.834 23592.960 - 23693.785: 99.4032% ( 3) 00:30:40.834 23693.785 - 23794.609: 99.4233% ( 3) 00:30:40.834 23794.609 - 23895.434: 99.4434% ( 3) 00:30:40.834 23895.434 - 23996.258: 99.4702% ( 4) 00:30:40.834 23996.258 - 24097.083: 99.4903% ( 3) 00:30:40.834 24097.083 - 24197.908: 99.5105% ( 3) 00:30:40.834 24197.908 - 24298.732: 99.5306% ( 3) 00:30:40.834 24298.732 - 24399.557: 99.5507% ( 3) 00:30:40.834 24399.557 - 24500.382: 99.5708% ( 3) 00:30:40.834 28029.243 - 28230.892: 99.5909% ( 3) 00:30:40.834 28230.892 - 28432.542: 99.6446% ( 8) 00:30:40.834 28432.542 - 28634.191: 99.6915% ( 7) 00:30:40.834 28634.191 - 28835.840: 99.7385% ( 7) 00:30:40.834 28835.840 - 29037.489: 99.7921% ( 8) 00:30:40.834 29037.489 - 29239.138: 99.8391% ( 7) 00:30:40.834 29239.138 - 29440.788: 99.8927% ( 8) 00:30:40.834 29440.788 - 29642.437: 99.9464% ( 8) 00:30:40.834 29642.437 - 29844.086: 99.9933% ( 7) 00:30:40.834 29844.086 - 30045.735: 100.0000% ( 1) 00:30:40.834 00:30:40.834 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:40.834 ============================================================================== 00:30:40.834 Range in us Cumulative IO count 00:30:40.834 6200.714 - 6225.920: 0.0134% ( 2) 00:30:40.834 6251.126 - 6276.332: 0.0201% ( 1) 00:30:40.834 6301.538 - 6326.745: 0.0335% ( 2) 00:30:40.834 6326.745 - 6351.951: 0.0402% ( 1) 00:30:40.834 6377.157 - 6402.363: 0.0604% ( 3) 00:30:40.834 6402.363 - 6427.569: 0.0939% ( 5) 00:30:40.834 6427.569 - 6452.775: 0.1207% ( 4) 00:30:40.834 6452.775 - 6503.188: 0.2817% ( 24) 00:30:40.834 6503.188 - 6553.600: 0.6035% ( 48) 00:30:40.834 6553.600 - 6604.012: 1.0461% ( 66) 00:30:40.834 6604.012 - 6654.425: 1.7503% ( 105) 00:30:40.834 6654.425 - 6704.837: 2.3940% ( 96) 00:30:40.834 6704.837 - 6755.249: 3.1854% ( 118) 00:30:40.834 6755.249 - 6805.662: 4.1778% ( 148) 00:30:40.834 6805.662 - 6856.074: 5.7470% ( 234) 00:30:40.834 6856.074 - 6906.486: 7.6717% ( 287) 00:30:40.834 6906.486 - 6956.898: 8.6373% ( 144) 00:30:40.834 6956.898 - 7007.311: 9.8511% ( 181) 00:30:40.834 7007.311 - 7057.723: 11.0917% ( 185) 00:30:40.834 7057.723 - 7108.135: 12.7079% ( 241) 00:30:40.834 7108.135 - 7158.548: 14.3911% ( 251) 00:30:40.834 7158.548 - 7208.960: 16.1816% ( 267) 00:30:40.834 7208.960 - 7259.372: 18.2672% ( 311) 00:30:40.834 7259.372 
- 7309.785: 20.1717% ( 284) 00:30:40.834 7309.785 - 7360.197: 21.6001% ( 213) 00:30:40.834 7360.197 - 7410.609: 23.1760% ( 235) 00:30:40.834 7410.609 - 7461.022: 25.0000% ( 272) 00:30:40.834 7461.022 - 7511.434: 26.3144% ( 196) 00:30:40.834 7511.434 - 7561.846: 27.8031% ( 222) 00:30:40.834 7561.846 - 7612.258: 29.1041% ( 194) 00:30:40.834 7612.258 - 7662.671: 30.3179% ( 181) 00:30:40.834 7662.671 - 7713.083: 31.3908% ( 160) 00:30:40.834 7713.083 - 7763.495: 32.8796% ( 222) 00:30:40.834 7763.495 - 7813.908: 34.3415% ( 218) 00:30:40.834 7813.908 - 7864.320: 36.1186% ( 265) 00:30:40.834 7864.320 - 7914.732: 37.5805% ( 218) 00:30:40.834 7914.732 - 7965.145: 39.2503% ( 249) 00:30:40.834 7965.145 - 8015.557: 40.9335% ( 251) 00:30:40.834 8015.557 - 8065.969: 42.1875% ( 187) 00:30:40.834 8065.969 - 8116.382: 44.2328% ( 305) 00:30:40.834 8116.382 - 8166.794: 45.9563% ( 257) 00:30:40.835 8166.794 - 8217.206: 47.6194% ( 248) 00:30:40.835 8217.206 - 8267.618: 49.3428% ( 257) 00:30:40.835 8267.618 - 8318.031: 51.3211% ( 295) 00:30:40.835 8318.031 - 8368.443: 52.9238% ( 239) 00:30:40.835 8368.443 - 8418.855: 55.1636% ( 334) 00:30:40.835 8418.855 - 8469.268: 57.0681% ( 284) 00:30:40.835 8469.268 - 8519.680: 58.6172% ( 231) 00:30:40.835 8519.680 - 8570.092: 59.9987% ( 206) 00:30:40.835 8570.092 - 8620.505: 61.0314% ( 154) 00:30:40.835 8620.505 - 8670.917: 62.1178% ( 162) 00:30:40.835 8670.917 - 8721.329: 63.2511% ( 169) 00:30:40.835 8721.329 - 8771.742: 64.4984% ( 186) 00:30:40.835 8771.742 - 8822.154: 65.9335% ( 214) 00:30:40.835 8822.154 - 8872.566: 67.2277% ( 193) 00:30:40.835 8872.566 - 8922.978: 68.5354% ( 195) 00:30:40.835 8922.978 - 8973.391: 69.8565% ( 197) 00:30:40.835 8973.391 - 9023.803: 70.6880% ( 124) 00:30:40.835 9023.803 - 9074.215: 71.4726% ( 117) 00:30:40.835 9074.215 - 9124.628: 72.4718% ( 149) 00:30:40.835 9124.628 - 9175.040: 73.4174% ( 141) 00:30:40.835 9175.040 - 9225.452: 74.5440% ( 168) 00:30:40.835 9225.452 - 9275.865: 75.7444% ( 179) 00:30:40.835 9275.865 - 9326.277: 77.1526% ( 210) 00:30:40.835 9326.277 - 9376.689: 78.6682% ( 226) 00:30:40.835 9376.689 - 9427.102: 79.8216% ( 172) 00:30:40.835 9427.102 - 9477.514: 81.0958% ( 190) 00:30:40.835 9477.514 - 9527.926: 82.3431% ( 186) 00:30:40.835 9527.926 - 9578.338: 83.2618% ( 137) 00:30:40.835 9578.338 - 9628.751: 84.4085% ( 171) 00:30:40.835 9628.751 - 9679.163: 85.7095% ( 194) 00:30:40.835 9679.163 - 9729.575: 86.6483% ( 140) 00:30:40.835 9729.575 - 9779.988: 87.4531% ( 120) 00:30:40.835 9779.988 - 9830.400: 88.0767% ( 93) 00:30:40.835 9830.400 - 9880.812: 88.7607% ( 102) 00:30:40.835 9880.812 - 9931.225: 89.2637% ( 75) 00:30:40.835 9931.225 - 9981.637: 89.6795% ( 62) 00:30:40.835 9981.637 - 10032.049: 90.1757% ( 74) 00:30:40.835 10032.049 - 10082.462: 90.6585% ( 72) 00:30:40.835 10082.462 - 10132.874: 91.0810% ( 63) 00:30:40.835 10132.874 - 10183.286: 91.4834% ( 60) 00:30:40.835 10183.286 - 10233.698: 91.8924% ( 61) 00:30:40.835 10233.698 - 10284.111: 92.4222% ( 79) 00:30:40.835 10284.111 - 10334.523: 92.8782% ( 68) 00:30:40.835 10334.523 - 10384.935: 93.1465% ( 40) 00:30:40.835 10384.935 - 10435.348: 93.4281% ( 42) 00:30:40.835 10435.348 - 10485.760: 93.7098% ( 42) 00:30:40.835 10485.760 - 10536.172: 93.8774% ( 25) 00:30:40.835 10536.172 - 10586.585: 94.0920% ( 32) 00:30:40.835 10586.585 - 10636.997: 94.3938% ( 45) 00:30:40.835 10636.997 - 10687.409: 94.6352% ( 36) 00:30:40.835 10687.409 - 10737.822: 94.8364% ( 30) 00:30:40.835 10737.822 - 10788.234: 95.1784% ( 51) 00:30:40.835 10788.234 - 10838.646: 95.3460% ( 25) 00:30:40.835 
10838.646 - 10889.058: 95.5271% ( 27) 00:30:40.835 10889.058 - 10939.471: 95.8289% ( 45) 00:30:40.835 10939.471 - 10989.883: 95.9496% ( 18) 00:30:40.835 10989.883 - 11040.295: 96.0435% ( 14) 00:30:40.835 11040.295 - 11090.708: 96.1977% ( 23) 00:30:40.835 11090.708 - 11141.120: 96.3720% ( 26) 00:30:40.835 11141.120 - 11191.532: 96.4659% ( 14) 00:30:40.835 11191.532 - 11241.945: 96.5732% ( 16) 00:30:40.835 11241.945 - 11292.357: 96.6939% ( 18) 00:30:40.835 11292.357 - 11342.769: 96.8146% ( 18) 00:30:40.835 11342.769 - 11393.182: 97.1164% ( 45) 00:30:40.835 11393.182 - 11443.594: 97.2505% ( 20) 00:30:40.835 11443.594 - 11494.006: 97.3042% ( 8) 00:30:40.835 11494.006 - 11544.418: 97.3511% ( 7) 00:30:40.835 11544.418 - 11594.831: 97.3981% ( 7) 00:30:40.835 11594.831 - 11645.243: 97.4450% ( 7) 00:30:40.835 11645.243 - 11695.655: 97.4920% ( 7) 00:30:40.835 11695.655 - 11746.068: 97.5389% ( 7) 00:30:40.835 11746.068 - 11796.480: 97.5925% ( 8) 00:30:40.835 11796.480 - 11846.892: 97.6730% ( 12) 00:30:40.835 11846.892 - 11897.305: 97.6998% ( 4) 00:30:40.835 11897.305 - 11947.717: 97.7669% ( 10) 00:30:40.835 11947.717 - 11998.129: 97.8273% ( 9) 00:30:40.835 11998.129 - 12048.542: 97.8876% ( 9) 00:30:40.835 12048.542 - 12098.954: 97.9278% ( 6) 00:30:40.835 12098.954 - 12149.366: 97.9748% ( 7) 00:30:40.835 12149.366 - 12199.778: 97.9949% ( 3) 00:30:40.835 12199.778 - 12250.191: 98.0150% ( 3) 00:30:40.835 12250.191 - 12300.603: 98.0351% ( 3) 00:30:40.835 12300.603 - 12351.015: 98.0754% ( 6) 00:30:40.835 12351.015 - 12401.428: 98.1156% ( 6) 00:30:40.835 12401.428 - 12451.840: 98.1626% ( 7) 00:30:40.835 12451.840 - 12502.252: 98.2095% ( 7) 00:30:40.835 12502.252 - 12552.665: 98.2631% ( 8) 00:30:40.835 12552.665 - 12603.077: 98.2833% ( 3) 00:30:40.835 13006.375 - 13107.200: 98.2900% ( 1) 00:30:40.835 13208.025 - 13308.849: 98.3168% ( 4) 00:30:40.835 13308.849 - 13409.674: 98.4777% ( 24) 00:30:40.835 13409.674 - 13510.498: 98.6119% ( 20) 00:30:40.835 13510.498 - 13611.323: 98.8332% ( 33) 00:30:40.835 13611.323 - 13712.148: 98.8734% ( 6) 00:30:40.835 13712.148 - 13812.972: 98.9203% ( 7) 00:30:40.835 13812.972 - 13913.797: 98.9606% ( 6) 00:30:40.835 13913.797 - 14014.622: 99.0075% ( 7) 00:30:40.835 14014.622 - 14115.446: 99.0477% ( 6) 00:30:40.835 14115.446 - 14216.271: 99.0880% ( 6) 00:30:40.835 14216.271 - 14317.095: 99.1215% ( 5) 00:30:40.835 14317.095 - 14417.920: 99.1416% ( 3) 00:30:40.835 21273.994 - 21374.818: 99.1550% ( 2) 00:30:40.835 21374.818 - 21475.643: 99.1752% ( 3) 00:30:40.835 21475.643 - 21576.468: 99.1953% ( 3) 00:30:40.835 21576.468 - 21677.292: 99.2154% ( 3) 00:30:40.835 21677.292 - 21778.117: 99.2355% ( 3) 00:30:40.835 21778.117 - 21878.942: 99.2556% ( 3) 00:30:40.835 21878.942 - 21979.766: 99.2758% ( 3) 00:30:40.835 21979.766 - 22080.591: 99.2959% ( 3) 00:30:40.835 22080.591 - 22181.415: 99.3160% ( 3) 00:30:40.835 22181.415 - 22282.240: 99.3361% ( 3) 00:30:40.835 22282.240 - 22383.065: 99.3562% ( 3) 00:30:40.835 22383.065 - 22483.889: 99.3763% ( 3) 00:30:40.835 22483.889 - 22584.714: 99.3965% ( 3) 00:30:40.835 22584.714 - 22685.538: 99.4166% ( 3) 00:30:40.835 22685.538 - 22786.363: 99.4434% ( 4) 00:30:40.835 22786.363 - 22887.188: 99.4702% ( 4) 00:30:40.835 22887.188 - 22988.012: 99.4970% ( 4) 00:30:40.835 22988.012 - 23088.837: 99.5239% ( 4) 00:30:40.835 23088.837 - 23189.662: 99.5507% ( 4) 00:30:40.835 23189.662 - 23290.486: 99.5708% ( 3) 00:30:40.835 26819.348 - 27020.997: 99.5775% ( 1) 00:30:40.835 27020.997 - 27222.646: 99.6312% ( 8) 00:30:40.835 27222.646 - 27424.295: 99.6781% ( 7) 
00:30:40.835 [ closing percentile rows of the preceding latency histogram elided; cumulative IO count reaches 100.0000% at 28634.191 - 28835.840 us ]
00:30:40.835 
00:30:40.835 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:40.835 ==============================================================================
00:30:40.835 Range in us Cumulative IO count
00:30:40.836 [ per-bucket rows elided; distribution spans 6276.332 - 26819.348 us, cumulative IO count reaching 100.0000% ]
00:30:40.836 
00:30:40.836 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:40.836 ==============================================================================
00:30:40.836 Range in us Cumulative IO count
00:30:40.837 [ per-bucket rows elided; distribution spans 6225.920 - 25206.154 us, cumulative IO count reaching 100.0000% ]
00:30:40.837 
00:30:40.837 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:30:40.837 ==============================================================================
00:30:40.837 Range in us Cumulative IO count
00:30:40.838 [ per-bucket rows elided; distribution spans 6276.332 - 23290.486 us, cumulative IO count reaching 100.0000% ]
00:30:40.838 
00:30:40.838 23:12:21 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:30:40.838 
00:30:40.838 real 0m2.534s
00:30:40.838 user 0m2.225s
00:30:40.838 sys 0m0.197s
00:30:40.838 23:12:21 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:40.838 23:12:21 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:30:40.838 ************************************
00:30:40.838 END TEST nvme_perf
00:30:40.838 ************************************
00:30:40.838 23:12:21 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:40.838 ************************************
00:30:40.838 START TEST nvme_hello_world
00:30:40.838 ************************************
00:30:40.838 23:12:21 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:40.838 Initializing NVMe Controllers
00:30:40.838 Attached to 0000:00:10.0
00:30:40.838 Namespace ID: 1 size: 6GB
00:30:40.838 Attached to 0000:00:11.0
00:30:40.838 Namespace ID: 1 size: 5GB
00:30:40.838 Attached to 0000:00:13.0
00:30:40.838 Namespace ID: 1 size: 1GB
00:30:40.838 Attached to 0000:00:12.0
00:30:40.838 Namespace ID: 1 size: 4GB
00:30:40.838 Namespace ID: 2 size: 4GB
00:30:40.838 Namespace ID: 3 size: 4GB
00:30:40.838 Initialization complete.
00:30:40.838 INFO: using host memory buffer for IO
00:30:40.838 Hello world!
00:30:40.838 INFO: using host memory buffer for IO
00:30:40.838 Hello world!
00:30:40.838 INFO: using host memory buffer for IO
00:30:40.838 Hello world!
00:30:40.838 INFO: using host memory buffer for IO
00:30:40.838 Hello world!
00:30:40.838 INFO: using host memory buffer for IO
00:30:40.838 Hello world!
00:30:40.838 INFO: using host memory buffer for IO 00:30:40.838 Hello world! 00:30:40.838 00:30:40.838 real 0m0.218s 00:30:40.838 user 0m0.088s 00:30:40.838 sys 0m0.091s 00:30:40.838 23:12:21 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.838 23:12:21 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:40.838 ************************************ 00:30:40.838 END TEST nvme_hello_world 00:30:40.838 ************************************ 00:30:40.838 23:12:21 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.838 23:12:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:40.838 ************************************ 00:30:40.838 START TEST nvme_sgl 00:30:40.838 ************************************ 00:30:40.838 23:12:21 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:41.096 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:30:41.096 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:30:41.096 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:30:41.096 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:30:41.097 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:30:41.097 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:30:41.097 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:30:41.097 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:30:41.097 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:30:41.097 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:30:41.097 NVMe Readv/Writev Request test 00:30:41.097 Attached to 0000:00:10.0 00:30:41.097 Attached to 0000:00:11.0 00:30:41.097 Attached to 0000:00:13.0 00:30:41.097 Attached to 0000:00:12.0 00:30:41.097 0000:00:10.0: build_io_request_2 test passed 00:30:41.097 0000:00:10.0: build_io_request_4 test passed 00:30:41.097 0000:00:10.0: build_io_request_5 test passed 00:30:41.097 0000:00:10.0: build_io_request_6 test passed 00:30:41.097 0000:00:10.0: build_io_request_7 test passed 00:30:41.097 0000:00:10.0: build_io_request_10 test passed 00:30:41.097 0000:00:11.0: build_io_request_2 test passed 00:30:41.097 0000:00:11.0: build_io_request_4 test passed 00:30:41.097 0000:00:11.0: build_io_request_5 test passed 00:30:41.097 0000:00:11.0: build_io_request_6 test passed 00:30:41.097 0000:00:11.0: build_io_request_7 test passed 00:30:41.097 0000:00:11.0: build_io_request_10 test passed 00:30:41.097 Cleaning up... 00:30:41.097 00:30:41.097 real 0m0.289s 00:30:41.097 user 0m0.135s 00:30:41.097 sys 0m0.103s 00:30:41.097 23:12:21 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.097 23:12:21 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:30:41.097 ************************************ 00:30:41.097 END TEST nvme_sgl 00:30:41.097 ************************************ 00:30:41.097 23:12:21 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:41.097 23:12:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:41.097 23:12:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.097 23:12:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:41.097 ************************************ 00:30:41.097 START TEST nvme_e2edp 00:30:41.097 ************************************ 00:30:41.097 23:12:21 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:41.355 NVMe Write/Read with End-to-End data protection test 00:30:41.355 Attached to 0000:00:10.0 00:30:41.355 Attached to 0000:00:11.0 00:30:41.355 Attached to 0000:00:13.0 00:30:41.355 Attached to 0000:00:12.0 00:30:41.355 Cleaning up... 
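A note for readers tracing these tests back to source: the nvme_hello_world pass further up is the simplest SPDK data-path exercise. For each attached namespace it writes a greeting through a DMA-safe host buffer, reads it back, and prints it, which is what the repeated "INFO: using host memory buffer for IO" / "Hello world!" pairs record (six pairs, matching the six namespaces attached in this run). A minimal sketch of that round trip, assuming the public SPDK NVMe API (spdk_nvme_ctrlr_alloc_io_qpair, spdk_zmalloc, spdk_nvme_ns_cmd_write/read); hello_ns and g_io_done are illustrative names, and probe/attach plumbing plus error handling are omitted:

#include "spdk/nvme.h"
#include "spdk/env.h"
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static volatile bool g_io_done;

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_io_done = true; /* one outstanding IO at a time in this sketch */
}

static void hello_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    /* "using host memory buffer for IO": a pinned, DMA-safe host buffer */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    snprintf(buf, 0x1000, "%s", "Hello world!");

    g_io_done = false;
    spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* LBA count */,
                           io_complete, NULL, 0);
    while (!g_io_done)
        spdk_nvme_qpair_process_completions(qpair, 0);

    memset(buf, 0, 0x1000);
    g_io_done = false;
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
    while (!g_io_done)
        spdk_nvme_qpair_process_completions(qpair, 0);

    printf("%s\n", buf); /* "Hello world!" on success */
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}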
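The nvme_sgl output above drives the same write path with scattered payloads: each build_io_request_N assembles a different scatter-gather layout, and layouts whose total length cannot satisfy the requested LBA count are refused at submission, which the harness logs as "Invalid IO length parameter" (six of the twelve layouts pass on 0000:00:10.0 and 0000:00:11.0, while every layout is rejected on the other two controllers in this run). A sketch of the callback contract spdk_nvme_ns_cmd_writev expects; sgl_ctx, reset_sgl, next_sge and submit_scattered_write are illustrative, not the test's actual code:

#include "spdk/nvme.h"
#include <sys/uio.h>

struct sgl_ctx {
    struct iovec *iov;   /* application buffers */
    int iovcnt;
    int idx;             /* walk cursor maintained for the driver */
    size_t offset;
};

static void reset_sgl(void *ref, uint32_t sgl_offset)
{
    struct sgl_ctx *c = ref;

    /* The driver may restart the walk at an arbitrary byte offset. */
    c->idx = 0;
    c->offset = sgl_offset;
    while (c->idx < c->iovcnt && c->offset >= c->iov[c->idx].iov_len) {
        c->offset -= c->iov[c->idx].iov_len;
        c->idx++;
    }
}

static int next_sge(void *ref, void **address, uint32_t *length)
{
    struct sgl_ctx *c = ref;

    /* Hand the driver the next segment; bounds checking elided. */
    *address = (char *)c->iov[c->idx].iov_base + c->offset;
    *length = (uint32_t)(c->iov[c->idx].iov_len - c->offset);
    c->offset = 0;
    c->idx++;
    return 0;
}

/* A nonzero return here is what the "Invalid IO length parameter"
 * lines record for the malformed layouts. */
static int submit_scattered_write(struct spdk_nvme_ns *ns,
                                  struct spdk_nvme_qpair *qp,
                                  struct sgl_ctx *c, uint64_t lba,
                                  uint32_t lba_count,
                                  spdk_nvme_cmd_cb cb, void *cb_arg)
{
    return spdk_nvme_ns_cmd_writev(ns, qp, lba, lba_count, cb, cb_arg, 0,
                                   reset_sgl, next_sge);
}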
00:30:41.355 00:30:41.355 real 0m0.212s 00:30:41.355 user 0m0.067s 00:30:41.355 sys 0m0.099s 00:30:41.355 23:12:21 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.355 23:12:21 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:30:41.355 ************************************ 00:30:41.355 END TEST nvme_e2edp 00:30:41.355 ************************************ 00:30:41.355 23:12:21 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:41.355 23:12:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:41.355 23:12:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.355 23:12:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:41.355 ************************************ 00:30:41.355 START TEST nvme_reserve 00:30:41.355 ************************************ 00:30:41.355 23:12:21 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:41.616 ===================================================== 00:30:41.616 NVMe Controller at PCI bus 0, device 16, function 0 00:30:41.616 ===================================================== 00:30:41.616 Reservations: Not Supported 00:30:41.616 ===================================================== 00:30:41.616 NVMe Controller at PCI bus 0, device 17, function 0 00:30:41.616 ===================================================== 00:30:41.616 Reservations: Not Supported 00:30:41.616 ===================================================== 00:30:41.616 NVMe Controller at PCI bus 0, device 19, function 0 00:30:41.616 ===================================================== 00:30:41.616 Reservations: Not Supported 00:30:41.616 ===================================================== 00:30:41.616 NVMe Controller at PCI bus 0, device 18, function 0 00:30:41.616 ===================================================== 00:30:41.616 Reservations: Not Supported 00:30:41.616 Reservation test passed 00:30:41.616 00:30:41.616 real 0m0.219s 00:30:41.616 user 0m0.080s 00:30:41.616 sys 0m0.093s 00:30:41.616 23:12:22 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.616 23:12:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:30:41.616 ************************************ 00:30:41.616 END TEST nvme_reserve 00:30:41.616 ************************************ 00:30:41.616 23:12:22 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:41.616 23:12:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:41.616 23:12:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.616 23:12:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:41.616 ************************************ 00:30:41.616 START TEST nvme_err_injection 00:30:41.616 ************************************ 00:30:41.616 23:12:22 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:41.967 NVMe Error Injection test 00:30:41.967 Attached to 0000:00:10.0 00:30:41.967 Attached to 0000:00:11.0 00:30:41.967 Attached to 0000:00:13.0 00:30:41.967 Attached to 0000:00:12.0 00:30:41.967 0000:00:11.0: get features failed as expected 00:30:41.967 0000:00:13.0: get features failed as expected 00:30:41.967 0000:00:12.0: get features failed as expected 00:30:41.967 0000:00:10.0: get features failed as expected 00:30:41.967 
0000:00:10.0: get features successfully as expected 00:30:41.967 0000:00:11.0: get features successfully as expected 00:30:41.967 0000:00:13.0: get features successfully as expected 00:30:41.967 0000:00:12.0: get features successfully as expected 00:30:41.967 0000:00:10.0: read failed as expected 00:30:41.967 0000:00:11.0: read failed as expected 00:30:41.967 0000:00:13.0: read failed as expected 00:30:41.967 0000:00:12.0: read failed as expected 00:30:41.967 0000:00:10.0: read successfully as expected 00:30:41.967 0000:00:11.0: read successfully as expected 00:30:41.967 0000:00:13.0: read successfully as expected 00:30:41.967 0000:00:12.0: read successfully as expected 00:30:41.967 Cleaning up... 00:30:41.967 00:30:41.967 real 0m0.232s 00:30:41.967 user 0m0.083s 00:30:41.967 sys 0m0.098s 00:30:41.967 23:12:22 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.967 23:12:22 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:30:41.967 ************************************ 00:30:41.967 END TEST nvme_err_injection 00:30:41.967 ************************************ 00:30:41.967 23:12:22 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:41.967 23:12:22 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:30:41.967 23:12:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.967 23:12:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:41.967 ************************************ 00:30:41.967 START TEST nvme_overhead 00:30:41.967 ************************************ 00:30:41.967 23:12:22 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:43.342 Initializing NVMe Controllers 00:30:43.342 Attached to 0000:00:10.0 00:30:43.342 Attached to 0000:00:11.0 00:30:43.342 Attached to 0000:00:13.0 00:30:43.342 Attached to 0000:00:12.0 00:30:43.342 Initialization complete. Launching workers. 
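The nvme_err_injection pass that closes above ("failed as expected", then "successfully as expected" on retry) relies on the driver's software fault hooks rather than real device faults. A hedged sketch of the arming call, assuming spdk_nvme_qpair_add_cmd_error_injection as declared in SPDK's public headers; the opcode and status codes here are a plausible GET FEATURES example, not necessarily the test's exact choices:

#include "spdk/nvme.h"

/* Arm a one-shot failure for the next GET FEATURES admin command; the
 * command is completed with the injected status instead of reaching the
 * device, which the test then reports as "failed as expected". Passing
 * NULL for the qpair is assumed to select the admin queue. */
static int arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES,
            true,   /* do_not_submit: complete in software */
            0,      /* delay before the injected completion, in us */
            1,      /* inject exactly once */
            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
}

/* After the expected failure the injection is removed and the same
 * command retried, giving the "successfully as expected" lines:
 * spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
 *                                            SPDK_NVME_OPC_GET_FEATURES);
 */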
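The nvme_overhead run that starts next measures per-IO software cost: roughly, how long spdk_nvme_ns_cmd_read takes to queue a command (the submit numbers) and how long the completion path spends reaping it (the complete numbers), with each sample bucketed into the histograms summarized below. A sketch of that timing idea using SPDK's tick helpers; time_one_io is an illustrative name and this is not the tool's actual source:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>

static volatile bool g_done;

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;
}

static void time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                        void *buf)
{
    uint64_t submit_ticks, complete_ticks = 0, t0, t1;

    g_done = false;
    t0 = spdk_get_ticks();
    spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, NULL, 0);
    submit_ticks = spdk_get_ticks() - t0;   /* feeds "submit (in ns)" */

    while (!g_done) {
        t1 = spdk_get_ticks();
        spdk_nvme_qpair_process_completions(qp, 0);
        if (g_done)                         /* the reap that finished us */
            complete_ticks = spdk_get_ticks() - t1;
    }

    /* ticks -> ns: x * 1000000000ULL / spdk_get_ticks_hz(); each sample
     * lands in one bucket of the histograms below. */
    (void)submit_ticks; (void)complete_ticks;
}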
00:30:43.342 submit (in ns) avg, min, max = 11681.5, 9708.5, 83895.4
00:30:43.342 complete (in ns) avg, min, max = 7736.8, 7172.3, 83521.5
00:30:43.342 
00:30:43.342 Submit histogram
00:30:43.342 ================
00:30:43.342 Range in us Cumulative Count
00:30:43.343 [ per-bucket rows elided; submit-side distribution spans 9.698 - 84.283 us, cumulative count reaching 100.0000% ]
00:30:43.343 
00:30:43.343 Complete histogram
00:30:43.343 ==================
00:30:43.343 Range in us Cumulative Count
00:30:43.344 [ per-bucket rows elided; complete-side distribution spans 7.138 - 83.889 us, cumulative count reaching 100.0000% ]
00:30:43.344 
00:30:43.344 
00:30:43.344 real 0m1.221s
00:30:43.344 user 0m1.070s
00:30:43.344 sys 0m0.101s
00:30:43.344 23:12:23 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:43.344 23:12:23 nvme.nvme_overhead --
common/autotest_common.sh@10 -- # set +x 00:30:43.344 ************************************ 00:30:43.344 END TEST nvme_overhead 00:30:43.344 ************************************ 00:30:43.344 23:12:23 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:43.344 23:12:23 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:43.344 23:12:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.344 23:12:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:43.344 ************************************ 00:30:43.344 START TEST nvme_arbitration 00:30:43.344 ************************************ 00:30:43.344 23:12:23 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:46.621 Initializing NVMe Controllers 00:30:46.621 Attached to 0000:00:10.0 00:30:46.621 Attached to 0000:00:11.0 00:30:46.621 Attached to 0000:00:13.0 00:30:46.621 Attached to 0000:00:12.0 00:30:46.621 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:46.621 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:30:46.621 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:30:46.621 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:30:46.621 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:30:46.621 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:30:46.621 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:46.621 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:46.621 Initialization complete. Launching workers. 00:30:46.621 Starting thread on core 1 with urgent priority queue 00:30:46.621 Starting thread on core 2 with urgent priority queue 00:30:46.621 Starting thread on core 3 with urgent priority queue 00:30:46.621 Starting thread on core 0 with urgent priority queue 00:30:46.621 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:30:46.621 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:30:46.621 QEMU NVMe Ctrl (12341 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:30:46.621 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:30:46.621 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:30:46.621 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:30:46.621 ======================================================== 00:30:46.621 00:30:46.621 00:30:46.621 real 0m3.297s 00:30:46.621 user 0m9.213s 00:30:46.621 sys 0m0.121s 00:30:46.621 23:12:27 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.621 23:12:27 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:30:46.621 ************************************ 00:30:46.621 END TEST nvme_arbitration 00:30:46.621 ************************************ 00:30:46.621 23:12:27 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:46.621 23:12:27 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:46.621 23:12:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.621 23:12:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:46.621 ************************************ 00:30:46.621 START TEST nvme_single_aen 00:30:46.621 ************************************ 00:30:46.621 23:12:27 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:46.621 Asynchronous Event Request test 00:30:46.621 Attached to 0000:00:10.0 00:30:46.621 Attached to 0000:00:11.0 00:30:46.621 Attached to 0000:00:13.0 00:30:46.621 Attached to 0000:00:12.0 00:30:46.621 Reset controller to setup AER completions for this process 00:30:46.621 Registering asynchronous event callbacks... 00:30:46.621 Getting orig temperature thresholds of all controllers 00:30:46.621 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:46.621 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:46.621 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:46.621 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:46.621 Setting all controllers temperature threshold low to trigger AER 00:30:46.621 Waiting for all controllers temperature threshold to be set lower 00:30:46.621 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:46.621 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:30:46.621 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:46.621 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:30:46.621 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:46.621 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:30:46.621 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:46.621 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:30:46.621 Waiting for all controllers to trigger AER and reset threshold 00:30:46.621 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.621 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.621 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.621 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.621 Cleaning up... 
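Two notes on the preceding runs. First, the nvme_arbitration figures a little further up ("Starting thread on core N with urgent priority queue", roughly 920-960 IO/s per thread) come from putting the controller into weighted-round-robin arbitration and opening IO qpairs with an explicit priority class. A sketch of the two knobs involved, assuming the arb_mechanism and qprio fields of SPDK's option structs; probe_cb and alloc_urgent_qpair are illustrative names:

#include "spdk/nvme.h"

/* 1) Request weighted round robin with urgent priority class at probe
 *    time, before the controller is enabled. */
static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
    return true; /* attach to this controller */
}

/* 2) Ask for a priority class when allocating each IO qpair. */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.qprio = SPDK_NVME_QPRIO_URGENT;
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}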
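Second, the nvme_single_aen output immediately above is the classic temperature-threshold trick: register an AER callback, drop the threshold below the drive's reported composite temperature so it raises an asynchronous event (the "aer_cb for log page 2" lines), and restore the original threshold from the handler. A sketch of the calls involved, assuming spdk_nvme_ctrlr_register_aer_callback and spdk_nvme_ctrlr_cmd_set_feature; trigger_temp_aer is illustrative, 343 K is the original threshold this log reports, and 200 K is simply any value below the observed 323 K:

#include "spdk/nvme.h"

static void set_done(void *arg, const struct spdk_nvme_cpl *cpl) {}

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    struct spdk_nvme_ctrlr *ctrlr = arg;

    /* A SMART/health async event arrived (log page 2); put the original
     * 343 K threshold back, as the "Resetting Temp Threshold" lines do. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
            343 /* cdw11: threshold in Kelvin */, 0,
            NULL, 0, set_done, NULL);
}

static void trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, ctrlr);

    /* Set the threshold below the ~323 K current temperature so the
     * controller fires an Asynchronous Event Request. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
            200, 0, NULL, 0, set_done, NULL);

    /* AERs surface on the admin queue: poll
     * spdk_nvme_ctrlr_process_admin_completions() until aer_cb has run
     * (loop and exit condition elided). */
    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}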
00:30:46.879 00:30:46.879 real 0m0.216s 00:30:46.879 user 0m0.073s 00:30:46.879 sys 0m0.095s 00:30:46.879 23:12:27 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.879 23:12:27 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:30:46.879 ************************************ 00:30:46.879 END TEST nvme_single_aen 00:30:46.879 ************************************ 00:30:46.879 23:12:27 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:46.879 23:12:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:46.879 23:12:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.879 23:12:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:46.879 ************************************ 00:30:46.879 START TEST nvme_doorbell_aers 00:30:46.879 ************************************ 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:46.879 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:30:46.880 23:12:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:46.880 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:46.880 23:12:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:30:47.137 [2024-12-09 23:12:27.558051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:30:57.125 Executing: test_write_invalid_db 00:30:57.125 Waiting for AER completion... 00:30:57.125 Failure: test_write_invalid_db 00:30:57.125 00:30:57.125 Executing: test_invalid_db_write_overflow_sq 00:30:57.125 Waiting for AER completion... 00:30:57.125 Failure: test_invalid_db_write_overflow_sq 00:30:57.125 00:30:57.125 Executing: test_invalid_db_write_overflow_cq 00:30:57.125 Waiting for AER completion... 
00:30:57.125 Failure: test_invalid_db_write_overflow_cq 00:30:57.125 00:30:57.125 23:12:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:57.125 23:12:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:30:57.125 [2024-12-09 23:12:37.588902] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:07.089 Executing: test_write_invalid_db 00:31:07.089 Waiting for AER completion... 00:31:07.089 Failure: test_write_invalid_db 00:31:07.089 00:31:07.089 Executing: test_invalid_db_write_overflow_sq 00:31:07.089 Waiting for AER completion... 00:31:07.089 Failure: test_invalid_db_write_overflow_sq 00:31:07.089 00:31:07.089 Executing: test_invalid_db_write_overflow_cq 00:31:07.089 Waiting for AER completion... 00:31:07.089 Failure: test_invalid_db_write_overflow_cq 00:31:07.089 00:31:07.089 23:12:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:07.089 23:12:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:07.089 [2024-12-09 23:12:47.639260] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:17.061 Executing: test_write_invalid_db 00:31:17.061 Waiting for AER completion... 00:31:17.061 Failure: test_write_invalid_db 00:31:17.061 00:31:17.061 Executing: test_invalid_db_write_overflow_sq 00:31:17.061 Waiting for AER completion... 00:31:17.061 Failure: test_invalid_db_write_overflow_sq 00:31:17.061 00:31:17.061 Executing: test_invalid_db_write_overflow_cq 00:31:17.061 Waiting for AER completion... 00:31:17.061 Failure: test_invalid_db_write_overflow_cq 00:31:17.061 00:31:17.061 23:12:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:17.061 23:12:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:17.061 [2024-12-09 23:12:57.660408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.023 Executing: test_write_invalid_db 00:31:27.023 Waiting for AER completion... 00:31:27.023 Failure: test_write_invalid_db 00:31:27.023 00:31:27.023 Executing: test_invalid_db_write_overflow_sq 00:31:27.023 Waiting for AER completion... 00:31:27.023 Failure: test_invalid_db_write_overflow_sq 00:31:27.023 00:31:27.023 Executing: test_invalid_db_write_overflow_cq 00:31:27.023 Waiting for AER completion... 
00:31:27.023 Failure: test_invalid_db_write_overflow_cq 00:31:27.023 00:31:27.023 00:31:27.023 real 0m40.181s 00:31:27.023 user 0m34.121s 00:31:27.023 sys 0m5.660s 00:31:27.023 23:13:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.023 23:13:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:31:27.023 ************************************ 00:31:27.023 END TEST nvme_doorbell_aers 00:31:27.023 ************************************ 00:31:27.023 23:13:07 nvme -- nvme/nvme.sh@97 -- # uname 00:31:27.023 23:13:07 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:27.023 23:13:07 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:27.023 23:13:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:27.023 23:13:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.023 23:13:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:27.023 ************************************ 00:31:27.023 START TEST nvme_multi_aen 00:31:27.023 ************************************ 00:31:27.023 23:13:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:27.279 [2024-12-09 23:13:07.709920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.709995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.710005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.711490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.711530] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.711538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.712605] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.712632] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.712640] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.279 [2024-12-09 23:13:07.713633] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.280 [2024-12-09 23:13:07.713660] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 00:31:27.280 [2024-12-09 23:13:07.713668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63242) is not found. Dropping the request. 
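The burst of nvme_pcie_common.c *ERROR* lines here (and in the per-controller doorbell runs above) is expected rather than a failure: each test binary re-attaches controllers that an earlier test process (pid 63242) had claimed, and pending admin requests owned by that now-dead process are dropped during re-initialization. When scanning a log like this by hand, it can help to separate that known noise from real errors — a hypothetical one-liner, assuming the console output was saved as build.log:

  # Show only *ERROR* lines that are NOT the benign "owning process is gone" drops.
  grep '\*ERROR\*' build.log | grep -v 'is not found. Dropping the request.'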
00:31:27.280 Child process pid: 63768 00:31:27.537 [Child] Asynchronous Event Request test 00:31:27.537 [Child] Attached to 0000:00:10.0 00:31:27.537 [Child] Attached to 0000:00:11.0 00:31:27.537 [Child] Attached to 0000:00:13.0 00:31:27.537 [Child] Attached to 0000:00:12.0 00:31:27.537 [Child] Registering asynchronous event callbacks... 00:31:27.537 [Child] Getting orig temperature thresholds of all controllers 00:31:27.537 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:27.537 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 [Child] Cleaning up... 00:31:27.537 Asynchronous Event Request test 00:31:27.537 Attached to 0000:00:10.0 00:31:27.537 Attached to 0000:00:11.0 00:31:27.537 Attached to 0000:00:13.0 00:31:27.537 Attached to 0000:00:12.0 00:31:27.537 Reset controller to setup AER completions for this process 00:31:27.537 Registering asynchronous event callbacks... 
00:31:27.537 Getting orig temperature thresholds of all controllers 00:31:27.537 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:27.537 Setting all controllers temperature threshold low to trigger AER 00:31:27.537 Waiting for all controllers temperature threshold to be set lower 00:31:27.537 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:31:27.537 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:31:27.537 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:31:27.537 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:27.537 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:31:27.537 Waiting for all controllers to trigger AER and reset threshold 00:31:27.537 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:27.537 Cleaning up... 00:31:27.537 00:31:27.537 real 0m0.445s 00:31:27.537 user 0m0.147s 00:31:27.537 sys 0m0.193s 00:31:27.537 23:13:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.537 23:13:07 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:31:27.537 ************************************ 00:31:27.537 END TEST nvme_multi_aen 00:31:27.537 ************************************ 00:31:27.537 23:13:07 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:27.537 23:13:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:27.537 23:13:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.537 23:13:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:27.537 ************************************ 00:31:27.537 START TEST nvme_startup 00:31:27.537 ************************************ 00:31:27.537 23:13:08 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:27.795 Initializing NVMe Controllers 00:31:27.795 Attached to 0000:00:10.0 00:31:27.795 Attached to 0000:00:11.0 00:31:27.795 Attached to 0000:00:13.0 00:31:27.795 Attached to 0000:00:12.0 00:31:27.795 Initialization complete. 00:31:27.795 Time used:137153.438 (us). 
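nvme_startup attaches all four controllers and reports the total initialization time — here about 137 ms against the budget passed as -t 1000000 (presumably microseconds, i.e. a 1 s limit). The same attach path can be timed by hand with the identify example used elsewhere in this run; a rough sketch, assuming an SPDK build tree and a device bound to the userspace driver:

  time ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' > /dev/null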
00:31:27.795 00:31:27.795 real 0m0.190s 00:31:27.795 user 0m0.062s 00:31:27.795 sys 0m0.082s 00:31:27.795 23:13:08 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.795 23:13:08 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:31:27.795 ************************************ 00:31:27.795 END TEST nvme_startup 00:31:27.795 ************************************ 00:31:27.795 23:13:08 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:27.795 23:13:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:27.795 23:13:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.795 23:13:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:27.795 ************************************ 00:31:27.795 START TEST nvme_multi_secondary 00:31:27.795 ************************************ 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63819 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63820 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:27.795 23:13:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:31.077 Initializing NVMe Controllers 00:31:31.077 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:31.077 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:31.077 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:31.077 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:31.077 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:31.077 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:31.077 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:31.077 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:31.077 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:31.077 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:31.077 Initialization complete. Launching workers. 
00:31:31.077 ======================================================== 00:31:31.077 Latency(us) 00:31:31.077 Device Information : IOPS MiB/s Average min max 00:31:31.077 PCIE (0000:00:10.0) NSID 1 from core 2: 3145.50 12.29 5084.44 1349.56 14477.27 00:31:31.077 PCIE (0000:00:11.0) NSID 1 from core 2: 3145.50 12.29 5086.53 1352.85 13255.82 00:31:31.077 PCIE (0000:00:13.0) NSID 1 from core 2: 3145.50 12.29 5086.46 1350.50 16812.43 00:31:31.077 PCIE (0000:00:12.0) NSID 1 from core 2: 3145.50 12.29 5086.64 1332.33 13436.09 00:31:31.077 PCIE (0000:00:12.0) NSID 2 from core 2: 3145.50 12.29 5086.59 1279.71 14133.43 00:31:31.077 PCIE (0000:00:12.0) NSID 3 from core 2: 3145.50 12.29 5086.59 1082.51 18534.87 00:31:31.077 ======================================================== 00:31:31.077 Total : 18873.00 73.72 5086.21 1082.51 18534.87 00:31:31.077 00:31:31.077 23:13:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63819 00:31:31.077 Initializing NVMe Controllers 00:31:31.077 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:31.077 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:31.078 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:31.078 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:31.078 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:31.078 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:31.078 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:31.078 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:31.078 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:31.078 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:31.078 Initialization complete. Launching workers. 00:31:31.078 ======================================================== 00:31:31.078 Latency(us) 00:31:31.078 Device Information : IOPS MiB/s Average min max 00:31:31.078 PCIE (0000:00:10.0) NSID 1 from core 1: 7716.40 30.14 2072.10 878.72 5456.02 00:31:31.078 PCIE (0000:00:11.0) NSID 1 from core 1: 7716.40 30.14 2073.09 934.98 5752.00 00:31:31.078 PCIE (0000:00:13.0) NSID 1 from core 1: 7716.40 30.14 2073.05 960.63 5677.56 00:31:31.078 PCIE (0000:00:12.0) NSID 1 from core 1: 7716.40 30.14 2073.02 932.94 6108.74 00:31:31.078 PCIE (0000:00:12.0) NSID 2 from core 1: 7716.40 30.14 2073.05 1003.90 6056.42 00:31:31.078 PCIE (0000:00:12.0) NSID 3 from core 1: 7716.40 30.14 2073.00 889.89 5513.53 00:31:31.078 ======================================================== 00:31:31.078 Total : 46298.38 180.85 2072.89 878.72 6108.74 00:31:31.078 00:31:32.982 Initializing NVMe Controllers 00:31:32.982 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:32.982 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:32.982 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:32.982 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:32.982 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:32.982 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:32.982 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:32.982 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:32.982 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:32.982 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:32.982 Initialization complete. Launching workers. 
00:31:32.982 ======================================================== 00:31:32.982 Latency(us) 00:31:32.982 Device Information : IOPS MiB/s Average min max 00:31:32.982 PCIE (0000:00:10.0) NSID 1 from core 0: 10833.59 42.32 1475.63 559.26 6082.54 00:31:32.982 PCIE (0000:00:11.0) NSID 1 from core 0: 10833.59 42.32 1476.50 586.76 6033.42 00:31:32.982 PCIE (0000:00:13.0) NSID 1 from core 0: 10607.02 41.43 1508.02 647.16 68084.98 00:31:32.982 PCIE (0000:00:12.0) NSID 1 from core 0: 10833.59 42.32 1476.47 577.69 5892.20 00:31:32.982 PCIE (0000:00:12.0) NSID 2 from core 0: 10833.59 42.32 1476.46 578.99 5709.17 00:31:32.982 PCIE (0000:00:12.0) NSID 3 from core 0: 10833.59 42.32 1476.45 572.81 6105.46 00:31:32.982 ======================================================== 00:31:32.982 Total : 64774.96 253.03 1481.50 559.26 68084.98 00:31:32.982 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63820 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63889 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63890 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:32.982 23:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:36.260 Initializing NVMe Controllers 00:31:36.260 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:36.260 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:36.260 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:36.260 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:36.260 Initialization complete. Launching workers. 
00:31:36.260 ======================================================== 00:31:36.260 Latency(us) 00:31:36.260 Device Information : IOPS MiB/s Average min max 00:31:36.260 PCIE (0000:00:10.0) NSID 1 from core 1: 7734.23 30.21 2067.34 798.24 5428.57 00:31:36.260 PCIE (0000:00:11.0) NSID 1 from core 1: 7734.23 30.21 2068.32 805.65 5274.57 00:31:36.260 PCIE (0000:00:13.0) NSID 1 from core 1: 7734.23 30.21 2068.32 796.77 5599.28 00:31:36.260 PCIE (0000:00:12.0) NSID 1 from core 1: 7734.23 30.21 2068.29 809.76 5772.41 00:31:36.260 PCIE (0000:00:12.0) NSID 2 from core 1: 7734.23 30.21 2068.34 824.27 5717.50 00:31:36.260 PCIE (0000:00:12.0) NSID 3 from core 1: 7734.23 30.21 2068.39 837.45 5268.32 00:31:36.260 ======================================================== 00:31:36.260 Total : 46405.38 181.27 2068.17 796.77 5772.41 00:31:36.260 00:31:36.260 Initializing NVMe Controllers 00:31:36.260 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:36.260 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:36.260 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:36.260 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:36.260 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:36.260 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:36.260 Initialization complete. Launching workers. 00:31:36.260 ======================================================== 00:31:36.260 Latency(us) 00:31:36.260 Device Information : IOPS MiB/s Average min max 00:31:36.260 PCIE (0000:00:10.0) NSID 1 from core 0: 7710.69 30.12 2073.64 756.80 6196.38 00:31:36.260 PCIE (0000:00:11.0) NSID 1 from core 0: 7710.69 30.12 2074.58 769.64 6545.32 00:31:36.260 PCIE (0000:00:13.0) NSID 1 from core 0: 7710.69 30.12 2074.51 723.16 6355.27 00:31:36.260 PCIE (0000:00:12.0) NSID 1 from core 0: 7710.69 30.12 2074.47 699.76 6039.50 00:31:36.260 PCIE (0000:00:12.0) NSID 2 from core 0: 7710.69 30.12 2074.43 685.46 6196.60 00:31:36.260 PCIE (0000:00:12.0) NSID 3 from core 0: 7710.69 30.12 2074.39 655.71 6517.56 00:31:36.260 ======================================================== 00:31:36.260 Total : 46264.15 180.72 2074.34 655.71 6545.32 00:31:36.260 00:31:38.782 Initializing NVMe Controllers 00:31:38.782 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:38.782 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:38.782 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:38.782 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:38.782 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:38.782 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:38.782 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:38.782 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:38.782 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:38.782 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:38.782 Initialization complete. Launching workers. 
00:31:38.782 ======================================================== 00:31:38.782 Latency(us) 00:31:38.782 Device Information : IOPS MiB/s Average min max 00:31:38.782 PCIE (0000:00:10.0) NSID 1 from core 2: 4632.35 18.10 3451.96 726.68 16267.34 00:31:38.782 PCIE (0000:00:11.0) NSID 1 from core 2: 4632.35 18.10 3453.27 738.17 12923.15 00:31:38.782 PCIE (0000:00:13.0) NSID 1 from core 2: 4632.35 18.10 3453.21 751.25 12148.39 00:31:38.782 PCIE (0000:00:12.0) NSID 1 from core 2: 4632.35 18.10 3453.14 748.11 12591.59 00:31:38.782 PCIE (0000:00:12.0) NSID 2 from core 2: 4632.35 18.10 3453.07 727.31 16482.71 00:31:38.782 PCIE (0000:00:12.0) NSID 3 from core 2: 4632.35 18.10 3453.02 708.31 12478.31 00:31:38.782 ======================================================== 00:31:38.782 Total : 27794.12 108.57 3452.95 708.31 16482.71 00:31:38.782 00:31:38.782 23:13:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63889 00:31:38.782 23:13:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63890 00:31:38.782 00:31:38.782 real 0m10.710s 00:31:38.782 user 0m18.416s 00:31:38.782 sys 0m0.628s 00:31:38.782 23:13:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.782 23:13:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:31:38.782 ************************************ 00:31:38.782 END TEST nvme_multi_secondary 00:31:38.782 ************************************ 00:31:38.782 23:13:18 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:38.782 23:13:18 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:31:38.782 23:13:18 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62845 ]] 00:31:38.782 23:13:18 nvme -- common/autotest_common.sh@1094 -- # kill 62845 00:31:38.782 23:13:18 nvme -- common/autotest_common.sh@1095 -- # wait 62845 00:31:38.782 [2024-12-09 23:13:18.982011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.782 [2024-12-09 23:13:18.982107] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.982506] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.982563] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.985498] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.985540] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.985552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.985564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.987293] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 
00:31:38.783 [2024-12-09 23:13:18.987340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.987351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.987361] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.989021] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.989063] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.989073] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:38.783 [2024-12-09 23:13:18.989083] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63766) is not found. Dropping the request. 00:31:39.713 [2024-12-09 23:13:20.258203] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:31:39.972 23:13:20 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:31:39.972 23:13:20 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:31:39.972 23:13:20 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:39.972 23:13:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:39.972 23:13:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.972 23:13:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:39.972 ************************************ 00:31:39.972 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:39.972 ************************************ 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:39.972 * Looking for test storage... 
00:31:39.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:31:39.972 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.973 --rc genhtml_branch_coverage=1 00:31:39.973 --rc genhtml_function_coverage=1 00:31:39.973 --rc genhtml_legend=1 00:31:39.973 --rc geninfo_all_blocks=1 00:31:39.973 --rc geninfo_unexecuted_blocks=1 00:31:39.973 00:31:39.973 ' 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.973 --rc genhtml_branch_coverage=1 00:31:39.973 --rc genhtml_function_coverage=1 00:31:39.973 --rc genhtml_legend=1 00:31:39.973 --rc geninfo_all_blocks=1 00:31:39.973 --rc geninfo_unexecuted_blocks=1 00:31:39.973 00:31:39.973 ' 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.973 --rc genhtml_branch_coverage=1 00:31:39.973 --rc genhtml_function_coverage=1 00:31:39.973 --rc genhtml_legend=1 00:31:39.973 --rc geninfo_all_blocks=1 00:31:39.973 --rc geninfo_unexecuted_blocks=1 00:31:39.973 00:31:39.973 ' 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.973 --rc genhtml_branch_coverage=1 00:31:39.973 --rc genhtml_function_coverage=1 00:31:39.973 --rc genhtml_legend=1 00:31:39.973 --rc geninfo_all_blocks=1 00:31:39.973 --rc geninfo_unexecuted_blocks=1 00:31:39.973 00:31:39.973 ' 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:39.973 
23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:39.973 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64058 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64058 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64058 ']' 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
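At this point the harness starts spdk_tgt -m 0xF and blocks until the target answers on /var/tmp/spdk.sock before driving it over RPC. The essence of the stuck-admin-command test that follows, restated as plain rpc.py calls (flags copied from the trace; opc 10 is the admin Get Features opcode, and --do_not_submit with --timeout-in-us 15000000 parks that one command for up to 15 s so the controller reset has something to recover from — the harness additionally submits the doomed Get Features via bdev_nvme_send_cmd before resetting):

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  scripts/rpc.py bdev_nvme_reset_controller nvme0   # must succeed despite the stuck command
  scripts/rpc.py bdev_nvme_detach_controller nvme0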
00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:40.232 23:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:40.232 [2024-12-09 23:13:20.702972] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:40.232 [2024-12-09 23:13:20.703101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64058 ] 00:31:40.490 [2024-12-09 23:13:20.873516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.490 [2024-12-09 23:13:20.982419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.490 [2024-12-09 23:13:20.982756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.490 [2024-12-09 23:13:20.983113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.490 [2024-12-09 23:13:20.983253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.056 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.056 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:31:41.056 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:41.057 nvme0n1 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_682tw.txt 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:41.057 true 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733786001 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64081 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:41.057 23:13:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:41.057 23:13:21 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:43.592 [2024-12-09 23:13:23.686919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:31:43.592 [2024-12-09 23:13:23.687534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:43.592 [2024-12-09 23:13:23.687596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:43.592 [2024-12-09 23:13:23.687615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.592 [2024-12-09 23:13:23.689247] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:31:43.592 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64081 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64081 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64081 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_682tw.txt 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:43.592 23:13:23 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_682tw.txt 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64058 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64058 ']' 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64058 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.592 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64058 00:31:43.593 killing process with pid 64058 00:31:43.593 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.593 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.593 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64058' 00:31:43.593 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64058 00:31:43.593 23:13:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64058 00:31:44.992 23:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:44.993 23:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:44.993 00:31:44.993 
real 0m4.941s 00:31:44.993 user 0m17.458s 00:31:44.993 sys 0m0.511s 00:31:44.993 23:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.993 ************************************ 00:31:44.993 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:44.993 ************************************ 00:31:44.993 23:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:44.993 23:13:25 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:44.993 23:13:25 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:44.993 23:13:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:44.993 23:13:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.993 23:13:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:44.993 ************************************ 00:31:44.993 START TEST nvme_fio 00:31:44.993 ************************************ 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:44.993 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:44.993 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:45.251 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:45.251 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:45.508 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:45.508 23:13:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:45.508 23:13:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:45.766 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.766 fio-3.35 00:31:45.766 Starting 1 thread 00:31:53.870 00:31:53.870 test: (groupid=0, jobs=1): err= 0: pid=64216: Mon Dec 9 23:13:33 2024 00:31:53.870 read: IOPS=18.1k, BW=70.6MiB/s (74.0MB/s)(141MiB/2001msec) 00:31:53.870 slat (nsec): min=3384, max=98152, avg=5804.76, stdev=3166.35 00:31:53.870 clat (usec): min=576, max=42783, avg=3481.77, stdev=1501.51 00:31:53.870 lat (usec): min=588, max=42821, avg=3487.57, stdev=1502.58 00:31:53.870 clat percentiles (usec): 00:31:53.870 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:31:53.870 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 3032], 60.00th=[ 3294], 00:31:53.870 | 70.00th=[ 3720], 80.00th=[ 4359], 90.00th=[ 5211], 95.00th=[ 5932], 00:31:53.870 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[30802], 99.95th=[32375], 00:31:53.870 | 99.99th=[35390] 00:31:53.870 bw ( KiB/s): min=65432, max=77504, per=99.77%, avg=72096.00, stdev=6133.22, samples=3 00:31:53.870 iops : min=16358, max=19376, avg=18024.00, stdev=1533.31, samples=3 00:31:53.870 write: IOPS=18.1k, BW=70.7MiB/s (74.1MB/s)(141MiB/2001msec); 0 zone resets 00:31:53.870 slat (usec): min=3, max=132, avg= 5.97, stdev= 3.26 00:31:53.870 clat (usec): min=598, max=46253, avg=3570.77, stdev=2229.53 00:31:53.870 lat (usec): min=611, max=46265, avg=3576.74, stdev=2230.47 00:31:53.870 clat percentiles (usec): 00:31:53.870 | 1.00th=[ 1909], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:31:53.870 | 30.00th=[ 2704], 40.00th=[ 2868], 50.00th=[ 3064], 60.00th=[ 3326], 00:31:53.870 | 70.00th=[ 3752], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5997], 00:31:53.870 | 99.00th=[ 7373], 99.50th=[ 8029], 99.90th=[44303], 99.95th=[44827], 00:31:53.870 | 99.99th=[45876] 00:31:53.870 bw ( KiB/s): min=65920, max=77472, per=99.51%, avg=72034.67, stdev=5805.71, samples=3 00:31:53.870 iops : min=16480, max=19368, avg=18008.67, stdev=1451.43, samples=3 00:31:53.870 
lat (usec) : 750=0.01%, 1000=0.03% 00:31:53.870 lat (msec) : 2=1.21%, 4=73.23%, 10=25.34%, 50=0.18% 00:31:53.871 cpu : usr=98.75%, sys=0.15%, ctx=5, majf=0, minf=607 00:31:53.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:53.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.871 issued rwts: total=36150,36211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.871 00:31:53.871 Run status group 0 (all jobs): 00:31:53.871 READ: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=141MiB (148MB), run=2001-2001msec 00:31:53.871 WRITE: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=141MiB (148MB), run=2001-2001msec 00:31:53.871 ----------------------------------------------------- 00:31:53.871 Suppressions used: 00:31:53.871 count bytes template 00:31:53.871 1 32 /usr/src/fio/parse.c 00:31:53.871 1 8 libtcmalloc_minimal.so 00:31:53.871 ----------------------------------------------------- 00:31:53.871 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:53.871 23:13:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:53.871 23:13:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:53.871 23:13:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:53.871 23:13:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:53.871 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:53.871 fio-3.35 00:31:53.871 Starting 1 thread 00:32:11.957 00:32:11.957 test: (groupid=0, jobs=1): err= 0: pid=64288: Mon Dec 9 23:13:51 2024 00:32:11.957 read: IOPS=17.7k, BW=69.0MiB/s (72.4MB/s)(138MiB/2001msec) 00:32:11.957 slat (usec): min=4, max=643, avg= 5.95, stdev= 4.86 00:32:11.957 clat (usec): min=243, max=10562, avg=3589.26, stdev=1312.93 00:32:11.957 lat (usec): min=247, max=10592, avg=3595.20, stdev=1314.55 00:32:11.957 clat percentiles (usec): 00:32:11.957 | 1.00th=[ 1926], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:32:11.957 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 3032], 60.00th=[ 3359], 00:32:11.957 | 70.00th=[ 3916], 80.00th=[ 4686], 90.00th=[ 5604], 95.00th=[ 6325], 00:32:11.957 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8455], 99.95th=[ 8848], 00:32:11.957 | 99.99th=[10159] 00:32:11.957 bw ( KiB/s): min=63680, max=77328, per=98.19%, avg=69429.33, stdev=7073.31, samples=3 00:32:11.957 iops : min=15920, max=19332, avg=17357.33, stdev=1768.33, samples=3 00:32:11.957 write: IOPS=17.7k, BW=69.0MiB/s (72.4MB/s)(138MiB/2001msec); 0 zone resets 00:32:11.957 slat (nsec): min=4291, max=72622, avg=6146.23, stdev=3607.29 00:32:11.957 clat (usec): min=286, max=10373, avg=3622.79, stdev=1323.27 00:32:11.957 lat (usec): min=291, max=10379, avg=3628.94, stdev=1324.93 00:32:11.957 clat percentiles (usec): 00:32:11.957 | 1.00th=[ 1991], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:32:11.957 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 3064], 60.00th=[ 3392], 00:32:11.957 | 70.00th=[ 3949], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6456], 00:32:11.957 | 99.00th=[ 7635], 99.50th=[ 8029], 99.90th=[ 8586], 99.95th=[ 8848], 00:32:11.957 | 99.99th=[10028] 00:32:11.957 bw ( KiB/s): min=63944, max=77256, per=98.12%, avg=69373.33, stdev=6986.88, samples=3 00:32:11.957 iops : min=15986, max=19314, avg=17343.33, stdev=1746.72, samples=3 00:32:11.957 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:32:11.957 lat (msec) : 2=1.10%, 4=69.63%, 10=29.22%, 20=0.01% 00:32:11.957 cpu : usr=98.65%, sys=0.05%, ctx=6, majf=0, minf=607 00:32:11.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:11.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.957 issued rwts: total=35371,35369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.957 00:32:11.957 Run status group 0 (all jobs): 00:32:11.957 READ: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:32:11.957 WRITE: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:32:11.957 ----------------------------------------------------- 00:32:11.957 Suppressions used: 00:32:11.957 count bytes template 00:32:11.957 1 32 
/usr/src/fio/parse.c 00:32:11.957 1 8 libtcmalloc_minimal.so 00:32:11.957 ----------------------------------------------------- 00:32:11.957 00:32:11.957 23:13:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:11.957 23:13:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:11.957 23:13:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:11.957 23:13:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:11.957 23:13:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:11.957 23:13:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:11.957 23:13:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:11.957 23:13:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:11.957 23:13:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:11.957 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:11.957 fio-3.35 00:32:11.957 Starting 1 thread 00:32:17.233 00:32:17.233 test: (groupid=0, jobs=1): err= 0: pid=64339: Mon Dec 9 23:13:57 2024 00:32:17.233 read: IOPS=16.9k, BW=66.0MiB/s (69.2MB/s)(133MiB/2013msec) 00:32:17.233 slat (nsec): min=3387, max=82993, avg=5212.23, stdev=2446.66 00:32:17.233 clat (usec): min=948, max=14871, avg=2952.48, stdev=1101.81 00:32:17.233 lat (usec): min=952, max=14875, avg=2957.69, stdev=1102.55 
00:32:17.233 clat percentiles (usec): 00:32:17.233 | 1.00th=[ 1401], 5.00th=[ 2057], 10.00th=[ 2376], 20.00th=[ 2474], 00:32:17.233 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:32:17.233 | 70.00th=[ 2737], 80.00th=[ 3261], 90.00th=[ 4359], 95.00th=[ 5407], 00:32:17.233 | 99.00th=[ 6718], 99.50th=[ 7373], 99.90th=[13435], 99.95th=[14091], 00:32:17.233 | 99.99th=[14746] 00:32:17.234 bw ( KiB/s): min=31944, max=95920, per=100.00%, avg=67918.00, stdev=31443.09, samples=4 00:32:17.234 iops : min= 7986, max=23980, avg=16979.50, stdev=7860.77, samples=4 00:32:17.234 write: IOPS=16.9k, BW=66.1MiB/s (69.3MB/s)(133MiB/2013msec); 0 zone resets 00:32:17.234 slat (usec): min=3, max=307, avg= 5.56, stdev= 3.54 00:32:17.234 clat (usec): min=1005, max=40054, avg=4592.93, stdev=5422.60 00:32:17.234 lat (usec): min=1010, max=40058, avg=4598.49, stdev=5423.10 00:32:17.234 clat percentiles (usec): 00:32:17.234 | 1.00th=[ 1631], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2474], 00:32:17.234 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:32:17.234 | 70.00th=[ 2900], 80.00th=[ 4080], 90.00th=[ 8586], 95.00th=[20055], 00:32:17.234 | 99.00th=[27132], 99.50th=[29492], 99.90th=[32637], 99.95th=[33817], 00:32:17.234 | 99.99th=[39584] 00:32:17.234 bw ( KiB/s): min=32544, max=95080, per=100.00%, avg=67956.00, stdev=30802.05, samples=4 00:32:17.234 iops : min= 8136, max=23770, avg=16989.00, stdev=7700.51, samples=4 00:32:17.234 lat (usec) : 1000=0.01% 00:32:17.234 lat (msec) : 2=3.75%, 4=79.69%, 10=11.64%, 20=2.44%, 50=2.48% 00:32:17.234 cpu : usr=98.76%, sys=0.30%, ctx=3, majf=0, minf=607 00:32:17.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:17.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.234 issued rwts: total=33990,34074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.234 00:32:17.234 Run status group 0 (all jobs): 00:32:17.234 READ: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=133MiB (139MB), run=2013-2013msec 00:32:17.234 WRITE: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=133MiB (140MB), run=2013-2013msec 00:32:17.234 ----------------------------------------------------- 00:32:17.234 Suppressions used: 00:32:17.234 count bytes template 00:32:17.234 1 32 /usr/src/fio/parse.c 00:32:17.234 1 8 libtcmalloc_minimal.so 00:32:17.234 ----------------------------------------------------- 00:32:17.234 00:32:17.234 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:17.234 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:17.234 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:17.234 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:17.490 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:17.490 23:13:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:17.490 23:13:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:17.490 23:13:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
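The per-device runs above and below all repeat the same preload sequence from common/autotest_common.sh (@1341..@1356 in the xtrace): the SPDK fio plugin is injected into stock fio via LD_PRELOAD, and because the plugin may be built with ASAN, the helper first runs ldd on it to find the sanitizer runtime it links against and preloads that runtime ahead of the plugin. A minimal sketch of the technique, using only paths visible in this log; the wrapper name run_fio_with_plugin is illustrative, not SPDK's:

    #!/usr/bin/env bash
    # Sketch of the fio_plugin preload logic traced here; not SPDK's exact code.
    run_fio_with_plugin() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=

        # Find the sanitizer runtime the plugin links against, if any.
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)",
        # so awk field 3 is the resolved path.
        for sanitizer in "${sanitizers[@]}"; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done

        # Preload the runtime first, then the plugin, then run fio as usual;
        # in this log asan_lib resolves to /usr/lib64/libasan.so.8.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    # Usage mirroring the trace: the PCIe address passes through --filename
    # with ':' rewritten to '.' for the SPDK ioengine.
    run_fio_with_plugin /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096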
00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:17.490 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:17.748 23:13:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:17.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:17.748 fio-3.35 00:32:17.748 Starting 1 thread 00:32:27.785 00:32:27.785 test: (groupid=0, jobs=1): err= 0: pid=64400: Mon Dec 9 23:14:07 2024 00:32:27.785 read: IOPS=23.2k, BW=90.8MiB/s (95.2MB/s)(182MiB/2001msec) 00:32:27.785 slat (nsec): min=4197, max=44255, avg=4939.16, stdev=1923.81 00:32:27.785 clat (usec): min=200, max=7682, avg=2751.97, stdev=698.51 00:32:27.785 lat (usec): min=205, max=7722, avg=2756.90, stdev=699.60 00:32:27.785 clat percentiles (usec): 00:32:27.785 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2474], 00:32:27.785 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:32:27.785 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 3163], 95.00th=[ 4359], 00:32:27.785 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 6718], 99.95th=[ 6980], 00:32:27.785 | 99.99th=[ 7504] 00:32:27.785 bw ( KiB/s): min=92552, max=94008, per=100.00%, avg=93221.33, stdev=735.06, samples=3 00:32:27.785 iops : min=23138, max=23502, avg=23305.33, stdev=183.76, samples=3 00:32:27.785 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(181MiB/2001msec); 0 zone resets 00:32:27.785 slat (nsec): min=4258, max=89268, avg=5238.34, stdev=1993.13 00:32:27.785 clat (usec): min=208, max=7558, avg=2752.42, stdev=688.23 00:32:27.785 lat (usec): min=213, max=7567, avg=2757.65, stdev=689.30 00:32:27.785 clat percentiles (usec): 00:32:27.785 | 1.00th=[ 1844], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2474], 00:32:27.785 | 30.00th=[ 2507], 40.00th=[ 
2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:32:27.785 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 3130], 95.00th=[ 4293], 00:32:27.785 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[ 6980], 00:32:27.785 | 99.99th=[ 7308] 00:32:27.785 bw ( KiB/s): min=92008, max=94144, per=100.00%, avg=93320.00, stdev=1148.58, samples=3 00:32:27.785 iops : min=23002, max=23536, avg=23330.00, stdev=287.14, samples=3 00:32:27.785 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:27.785 lat (msec) : 2=1.87%, 4=92.21%, 10=5.88% 00:32:27.785 cpu : usr=99.20%, sys=0.05%, ctx=3, majf=0, minf=605 00:32:27.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:27.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.785 issued rwts: total=46513,46226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.785 00:32:27.785 Run status group 0 (all jobs): 00:32:27.785 READ: bw=90.8MiB/s (95.2MB/s), 90.8MiB/s-90.8MiB/s (95.2MB/s-95.2MB/s), io=182MiB (191MB), run=2001-2001msec 00:32:27.785 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:32:27.785 ----------------------------------------------------- 00:32:27.785 Suppressions used: 00:32:27.785 count bytes template 00:32:27.785 1 32 /usr/src/fio/parse.c 00:32:27.785 1 8 libtcmalloc_minimal.so 00:32:27.785 ----------------------------------------------------- 00:32:27.785 00:32:27.785 23:14:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:27.785 23:14:07 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:32:27.785 00:32:27.785 real 0m42.454s 00:32:27.785 user 0m23.178s 00:32:27.785 sys 0m35.598s 00:32:27.785 23:14:07 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.785 23:14:07 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:32:27.785 ************************************ 00:32:27.785 END TEST nvme_fio 00:32:27.785 ************************************ 00:32:27.785 ************************************ 00:32:27.785 END TEST nvme 00:32:27.785 ************************************ 00:32:27.785 00:32:27.785 real 1m53.025s 00:32:27.785 user 3m45.612s 00:32:27.785 sys 0m46.040s 00:32:27.785 23:14:07 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.785 23:14:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:27.785 23:14:07 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:32:27.785 23:14:07 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:27.785 23:14:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.785 23:14:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.785 23:14:07 -- common/autotest_common.sh@10 -- # set +x 00:32:27.785 ************************************ 00:32:27.785 START TEST nvme_scc 00:32:27.785 ************************************ 00:32:27.785 23:14:07 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:27.785 * Looking for test storage... 
00:32:27.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.785 23:14:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.785 23:14:08 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.785 --rc genhtml_branch_coverage=1 00:32:27.785 --rc genhtml_function_coverage=1 00:32:27.785 --rc genhtml_legend=1 00:32:27.786 --rc geninfo_all_blocks=1 00:32:27.786 --rc geninfo_unexecuted_blocks=1 00:32:27.786 00:32:27.786 ' 00:32:27.786 23:14:08 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.786 --rc genhtml_branch_coverage=1 00:32:27.786 --rc genhtml_function_coverage=1 00:32:27.786 --rc genhtml_legend=1 00:32:27.786 --rc geninfo_all_blocks=1 00:32:27.786 --rc geninfo_unexecuted_blocks=1 00:32:27.786 00:32:27.786 ' 00:32:27.786 23:14:08 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.786 --rc genhtml_branch_coverage=1 00:32:27.786 --rc genhtml_function_coverage=1 00:32:27.786 --rc genhtml_legend=1 00:32:27.786 --rc geninfo_all_blocks=1 00:32:27.786 --rc geninfo_unexecuted_blocks=1 00:32:27.786 00:32:27.786 ' 00:32:27.786 23:14:08 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.786 --rc genhtml_branch_coverage=1 00:32:27.786 --rc genhtml_function_coverage=1 00:32:27.786 --rc genhtml_legend=1 00:32:27.786 --rc geninfo_all_blocks=1 00:32:27.786 --rc geninfo_unexecuted_blocks=1 00:32:27.786 00:32:27.786 ' 00:32:27.786 23:14:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:27.786 23:14:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.786 23:14:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.786 23:14:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.786 23:14:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.786 23:14:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.786 23:14:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.786 23:14:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.786 23:14:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:32:27.786 23:14:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
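The records that follow show nvme/functions.sh being sourced for nvme_scc: it declares the ctrls/nvmes/bdfs associative arrays, runs setup.sh reset to rebind the four QEMU controllers from uio_pci_generic back to the kernel nvme driver, and then scan_nvme_ctrls walks /sys/class/nvme/nvme* filling those arrays. A rough sketch of that scan skeleton, with array names taken from the xtrace; the sysfs resolution of the PCI address below is an assumption, not functions.sh verbatim:

    # Sketch of the controller scan traced below (simplified).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                      # e.g. nvme0
        # Resolve the bus/device/function the controller sits on,
        # e.g. 0000:00:11.0 in this run (assumed mechanism).
        pci=$(basename "$(readlink -f "$ctrl/device")")
        bdfs["$ctrl_dev"]=$pci
        # nvme_get (sketched after the next record) then parses
        # `nvme id-ctrl /dev/$ctrl_dev` into an array named after the device.
    done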
00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:27.786 23:14:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:32:27.786 23:14:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:27.786 23:14:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:32:27.786 23:14:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:27.786 23:14:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:27.786 23:14:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:28.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:28.048 Waiting for block devices as requested 00:32:28.048 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.048 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.306 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.306 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.578 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:33.578 23:14:13 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:33.578 23:14:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:33.578 23:14:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:33.578 23:14:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:33.578 23:14:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
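The vid record above is the first of dozens of fields that nvme_get copies out of `nvme id-ctrl`; every record below follows the same IFS=':' read/eval pattern. A condensed sketch of that parser, with names taken from the xtrace (whitespace handling is simplified relative to the real helper, which preserves padded values such as sn='12341   '):

    # Sketch of nvme_get as traced here: split each "reg : val" line of
    # nvme-cli output on ':' and store it in a controller-named assoc array.
    nvme_get() {
        local ref=$1 reg val            # e.g. ref=nvme0
        shift                           # remaining args: id-ctrl /dev/nvme0
        local -gA "$ref=()"             # matches `local -gA 'nvme0=()'` above

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}    # "vid       " -> "vid"
            val=${val# }                # drop the single space after ':'
            # Effectively: eval 'nvme0[vid]="0x1b36"'
            eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Invocation matching functions.sh@52: nvme_get nvme0 id-ctrl /dev/nvme0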
00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:33.578 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
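A few records below this point the dump reaches nvme0[oncs]=0x15d. ONCS is the field this nvme_scc suite ultimately cares about: per the NVMe base spec, bit 8 (0x100) of ONCS advertises the Copy command, i.e. the simple-copy capability under test, and 0x15d has that bit set. A hedged sketch of the check; the helper name supports_scc is mine, not functions.sh's:

    # Does a scanned controller advertise the Copy command (SCC)?
    supports_scc() {
        local -n ctrl=$1                 # nameref to a parsed array, e.g. nvme0
        (( (ctrl[oncs] & 0x100) != 0 ))  # ONCS bit 8 = Copy per NVMe spec
    }
    # supports_scc nvme0 && echo "0000:00:11.0 supports simple copy"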
00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.579 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:33.580 23:14:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.580 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:33.581 23:14:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.581 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:32:33.582 
23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
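
The trace above shows nvme_get walking the "reg : val" lines that `nvme id-ns /dev/ng0n1` prints and eval-ing each pair into a global associative array named after the device. A minimal sketch of that loop, assuming only the line format visible in the trace — the function body, the fake_id_ns helper, and the names here are illustrative, not the SPDK source:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declare a global array, e.g. 'ng0n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip the banner line and blanks
            reg=${reg//[[:space:]]/}        # 'nsze    ' -> 'nsze'
            eval "${ref}[${reg}]=\"${val# }\""
        done < <("$@")                      # e.g. nvme id-ns /dev/ng0n1
    }

    # Runs without a device: feed canned id-ns output through the same loop.
    fake_id_ns() {
        printf '%s\n' 'nsze    : 0x140000' 'ncap    : 0x140000' 'flbas   : 0x4'
    }
    nvme_get ng0n1 fake_id_ns
    echo "nsze=${ng0n1[nsze]} flbas=${ng0n1[flbas]}"   # nsze=0x140000 flbas=0x4

Splitting on the first colon is what lets multi-colon values such as 'ms:0 lbads:9 rp:0' land intact in val, which matches the lbafN assignments recorded in the trace.
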
00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:32:33.582 23:14:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.582 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:33.583 23:14:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:33.583 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:33.584 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:32:33.584 23:14:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:33.584 23:14:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:33.584 23:14:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:33.584 23:14:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:32:33.584 23:14:14 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:32:33.584 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 
23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:33.585 
23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:33.585 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:33.586 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.587 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:33.587 23:14:14 
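A note for readers skimming this trace: every IFS=:/read/eval triplet above is one iteration of the nvme_get helper in nvme/functions.sh, which turns "field : value" output from nvme id-ctrl / nvme id-ns into a bash associative array named after the device. A minimal sketch of that loop, reconstructed from the xtrace (the exact trimming and quoting details are assumptions; the nvme binary path is the one the log shows):

    nvme_get() {                       # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # global associative array named after the device
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue              # skip headers and blank lines
            reg=${reg//[[:space:]]/}               # 'sqes   ' -> 'sqes'
            eval "${ref}[\$reg]=\${val# }"         # nvme1[sqes]='0x66'
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }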
00:32:33.587 23:14:14 nvme_scc -- nvme/functions.sh@21-23 -- # (xtrace condensed) ng1n1 id-ns fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.588 23:14:14 nvme_scc -- # all other id-ns fields are 0 (dps nmic rescap fpi nawun nawupf nacwu nabsn nabo nabspf noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid)
00:32:33.589 23:14:14 nvme_scc -- # LBA formats: lbaf0-3 = ms:{0,8,16,64} lbads:9 rp:0; lbaf4-7 = ms:{0,8,16,64} lbads:12 rp:0; lbaf7 is '(in use)'
00:32:33.589 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:32:33.589 23:14:14 nvme_scc -- nvme/functions.sh@54-57 -- # next namespace node: /sys/class/nvme/nvme1/nvme1n1 exists; ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
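The lbaf entries captured above are worth decoding once: flbas=0x7 selects LBA format 7, and its descriptor 'ms:64 lbads:12 rp:0 (in use)' means 2^12 = 4096-byte data blocks carrying 64 bytes of metadata each, so nsze=0x17a17a (1548666 blocks) is roughly 5.9 GiB. A small illustrative sketch against the array just built (this parsing is an editor's assumption, not part of functions.sh):

    flbas=$(( ${ng1n1[flbas]} & 0xf ))            # low nibble of FLBAS picks the format: 7
    lbaf=${ng1n1[lbaf$flbas]}                     # 'ms:64 lbads:12 rp:0 (in use)'
    ms=${lbaf#ms:};        ms=${ms%% *}           # metadata bytes per block: 64
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}     # log2(data block size): 12
    echo "$(( 1 << lbads ))-byte blocks, ${ms}B metadata, $(( ${ng1n1[nsze]} )) blocks"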
00:32:33.589 23:14:14 nvme_scc -- nvme/functions.sh@21-23 -- # (xtrace condensed) nvme1n1 id-ns output matches ng1n1 exactly (same namespace, reached through the block node instead of the generic char node): nsze/ncap/nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, all other fields 0, identical lbaf0-7 table with lbaf7 in use
00:32:33.591 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1 (overwrites the ng1n1 entry at index 1)
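The two id-ns passes above (ng1n1 first, then nvme1n1) come from the extglob loop visible at nvme/functions.sh@53-58. Roughly, as it runs inside the enumeration function (a sketch assembled from the trace; error handling omitted):

    shopt -s extglob
    local -n _ctrl_ns=${ctrl_dev}_ns                           # e.g. nvme1_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                                       # ng1n1, nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev    # both resolve to index 1; nvme1n1 wins
    done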
00:32:33.591 23:14:14 nvme_scc -- nvme/functions.sh@60-63 -- # register controller: ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[1]=nvme1
00:32:33.591 23:14:14 nvme_scc -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme2 exists; pci=0000:00:12.0; pci_can_use 0000:00:12.0 passes (scripts/common.sh@18-27 returns 0); ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
00:32:33.591 23:14:14 nvme_scc -- # (xtrace condensed) nvme2 id-ctrl so far: vid=0x1b36 ssvid=0x1af4 sn='12342   ' mn='QEMU NVMe Ctrl  ' fr='8.0.0   ' rab=6 ieee=525400 mdts=7 ver=0x10400 oaes=0x100 ctratt=0x8000 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373
00:32:33.592 23:14:14 nvme_scc -- # zero-valued so far: cmic cntlid rtd3r rtd3e rrls crdt1 crdt2 crdt3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt
00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.592 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:33.593 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:33.593 
23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.593 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.594 
23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
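[editor's sketch] At this point the trace has switched from the controller to its namespaces: functions.sh@53 binds a nameref to the per-controller array (nvme2_ns), @54 globs the controller's sysfs entries, @55-@57 probe each node with id-ns via nvme_get, and @58 files the device name under its namespace id. A condensed reconstruction of that loop, assuming nvme_get behaves as sketched earlier; the function wrapper and the declare line are additions so it runs standalone:

  #!/usr/bin/env bash
  shopt -s extglob   # required for the @( ... ) glob pattern used at @54

  scan_ctrl_namespaces() {
      local ctrl=$1                        # e.g. /sys/class/nvme/nvme2
      declare -gA "${ctrl##*/}_ns"         # target of the nameref below
      local -n _ctrl_ns="${ctrl##*/}_ns"   # @53: e.g. nvme2_ns
      local ns ns_dev
      # @54: match both the generic char nodes (ng2n1, ng2n2, ...) and the
      # block nodes (nvme2n1, ...) under the controller's sysfs directory.
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          [[ -e $ns ]] || continue                  # @55
          ns_dev=${ns##*/}                          # @56: ng2n1, nvme2n1, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
          _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by ns id
      done
  }

  # scan_ctrl_namespaces /sys/class/nvme/nvme2 would leave nvme2_ns[1]=ng2n1,
  # nvme2_ns[2]=ng2n2, ... matching the ng2n1/ng2n2/ng2n3 passes in this trace.
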
00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.594 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.595 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:32:33.596 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 
23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # 
00:32:33.596 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:32:33.597 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
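Note: the functions.sh@21-23 trace above repeats one parsing idiom: each "field : value" line emitted by `nvme id-ns` is split on the first ':' with IFS, then eval'd into a global associative array named after the device node. A minimal self-contained sketch of that idiom (parse_id_ns is a hypothetical stand-in; the real nvme_get in nvme/functions.sh differs in detail):

#!/usr/bin/env bash
# Sketch of the nvme_get idiom traced in this log; parse_id_ns is a made-up name.
parse_id_ns() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # functions.sh@20: global assoc array, e.g. ng2n3=()
    while IFS=: read -r reg val; do      # functions.sh@21: split on the first ':' only
        reg=${reg//[[:space:]]/}         # 'lbaf  4 ' -> 'lbaf4'
        val=${val# }                     # drop the leading space after ':'
        [[ -n $val ]] || continue        # functions.sh@22: skip header/empty lines
        eval "${ref}[${reg}]=\"${val}\"" # functions.sh@23: e.g. ng2n3[nsze]="0x100000"
    done < <(nvme id-ns "$dev")
}
parse_id_ns ng2n3 /dev/ng2n3
echo "${ng2n3[nsze]}"                    # 0x100000 on the namespaces in this run

Because val is the last variable given to read, colons inside the value survive intact, which is why strings like 'ms:0 lbads:9 rp:0' land in the array unmangled.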
00:32:33.598 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000
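Note: the functions.sh@58 lines above index each parsed namespace by ordinal: ${ns##*n} strips everything through the last 'n' in the node name, so ng2n3 and nvme2n3 both map to slot 3 of _ctrl_ns. A short sketch of just that expansion (bare names stand in for the full sysfs paths the script actually loops over):

declare -A _ctrl_ns
for ns in ng2n3 nvme2n1 nvme2n2; do
    echo "$ns -> ${ns##*n}"   # ng2n3 -> 3, nvme2n1 -> 1, nvme2n2 -> 2
    _ctrl_ns[${ns##*n}]=$ns   # functions.sh@58: the last write per ordinal wins
done

Since the char node (ngXnY) is visited before the block node (nvmeXnY), the block-device name ends up as the recorded entry for each namespace number.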
00:32:33.599 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.600 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:33.600 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:32:33.600 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:32:33.600 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:32:33.601 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:32:33.601 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000
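Note: every namespace in this run reports lbaf4='ms:0 lbads:12 rp:0 (in use)'. In an id-ns LBA format descriptor, lbads is the log2 of the LBA data size, so the in-use format is 2^12 = 4096-byte blocks with no per-block metadata (ms:0), while lbaf0 (lbads:9) would be classic 512-byte blocks. A small sketch that recovers the byte size from such a string (lba_block_size is a hypothetical helper, not part of functions.sh):

# Parse the 'lbads:N' exponent out of an LBA-format string and return 2^N.
lba_block_size() {
    local lbaf=$1 lbads
    lbads=${lbaf##*lbads:}       # keep the text after 'lbads:'
    lbads=${lbads%% *}           # keep the first word, i.e. the exponent
    echo $(( 1 << lbads ))
}

lba_block_size 'ms:0 lbads:12 rp:0 (in use)'   # prints 4096
lba_block_size 'ms:8 lbads:9 rp:0 '            # prints 512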
00:32:33.862 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.863 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
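Note: the functions.sh@54 loop header visible throughout this trace relies on bash's extglob: @(a|b) matches either alternative, so a single glob enumerates both the generic character nodes (ng2nY) and the block nodes (nvme2nY) under one controller's sysfs directory. A standalone sketch of that glob, assuming the same sysfs layout as this run:

#!/usr/bin/env bash
# @(...) needs extglob; nullglob makes an unmatched pattern expand to nothing.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so this matches ng2* and nvme2n*.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "found namespace node: ${ns##*/}"   # ng2n1 ... ng2n3, nvme2n1 ... nvme2n3
done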
00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128
nvme2n3[mcl]=128 00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.864 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:33.865 23:14:14 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:33.865 23:14:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:33.865 23:14:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:33.865 23:14:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:33.865 23:14:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 
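Two of the identifiers just captured for nvme3 are worth decoding: vid 0x1b36 and ssvid 0x1af4 are Red Hat/QEMU PCI vendor IDs, and the same values resurface in decimal in the simple-copy output further down ("PCI vendor:6966 PCI subsystem vendor:6900"):

    printf '%d %d\n' 0x1b36 0x1af4   # -> 6966 6900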
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:33.865 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:33.866 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:33.866 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 
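Stepping back, the reason nvme3 is being identified at all is the discovery loop traced at functions.sh@47-52 above: each /sys/class/nvme/nvmeN entry is resolved to its PCI address and filtered through pci_can_use before nvme_get runs. A condensed sketch, assuming the usual sysfs layout (the real pci_can_use also honors SPDK's PCI_ALLOWED/PCI_BLOCKED lists, both empty in this run, which is why the trace shows the bare [[ =~ 0000:00:13.0 ]] test):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue                    # allow/block-list gate
        ctrl_dev=${ctrl##*/}                              # nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    done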
23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.866 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:33.867 23:14:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 
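The wctemp/cctemp pair captured a few records back is reported in Kelvin, as the NVMe Identify Controller structure defines; converted, the values this QEMU device reports are a 70 C warning and a 100 C critical threshold:

    wctemp=343 cctemp=373
    echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"
    # -> warning: 70 C, critical: 100 C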
23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:33.867 
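The sqes/cqes pair being captured right here is packed powers of two: the low nibble is the required queue-entry size and the high nibble the maximum, each as a 2^n byte count:

    sqes=0x66 cqes=0x44
    echo "SQ entry: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # -> 64-64
    echo "CQ entry: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # -> 16-16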
23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:33.867 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:33.868 23:14:14 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:33.868 23:14:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
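This is the feature gate the whole scan was building toward: get_ctrls_with_feature keeps a controller only if ONCS bit 8, the Copy command (SCC) capability bit, is set. With the 0x15d that every controller here reports, the test passes for all four. Note that shell arithmetic binds << tighter than &, so the unparenthesized expression at functions.sh@188 means what it appears to:

    oncs=0x15d
    printf 'ONCS=%#x  SCC bit=%#x  ' "$oncs" $((1 << 8))
    (( oncs & 1 << 8 )) && echo supported || echo unsupported
    # -> ONCS=0x15d  SCC bit=0x100  supported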
00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:33.868 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:32:33.869 23:14:14 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:32:33.869 23:14:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:32:33.869 23:14:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:32:33.869 23:14:14 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:34.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:34.703 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:34.703 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:34.703 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:34.703 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:34.703 23:14:15 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:34.703 23:14:15 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:34.703 23:14:15 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.703 23:14:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:34.703 ************************************ 00:32:34.703 START TEST nvme_simple_copy 00:32:34.703 ************************************ 00:32:34.703 23:14:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:34.961 Initializing NVMe Controllers 00:32:34.961 Attaching to 0000:00:10.0 00:32:34.961 Controller supports SCC. Attached to 0000:00:10.0 00:32:34.961 Namespace ID: 1 size: 6GB 00:32:34.961 Initialization complete. 
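The nvme_simple_copy test whose output begins here writes LBAs 0 to 63 with random data, issues a Simple Copy command targeting destination LBA 256, then reads the destination back and counts matching LBAs; the "LBAs matching Written Data: 64" line below is the pass condition. At this point the controller is driven through SPDK's userspace PCIe driver, so there is no kernel block node to poke, but as a purely illustrative re-check, assuming the device were back under the kernel nvme driver as a hypothetical /dev/nvme1n1 with 4096-byte blocks, the verification amounts to:

    bs=4096 src=0 dst=256 count=64
    cmp <(dd if=/dev/nvme1n1 bs=$bs skip=$src count=$count status=none) \
        <(dd if=/dev/nvme1n1 bs=$bs skip=$dst count=$count status=none) \
      && echo "LBAs matching Written Data: $count"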
00:32:34.961 00:32:34.961 Controller QEMU NVMe Ctrl (12340 ) 00:32:34.961 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:34.961 Namespace Block Size:4096 00:32:34.961 Writing LBAs 0 to 63 with Random Data 00:32:34.961 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:34.961 LBAs matching Written Data: 64 00:32:35.219 00:32:35.219 real 0m0.271s 00:32:35.219 user 0m0.100s 00:32:35.219 sys 0m0.068s 00:32:35.219 23:14:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.219 ************************************ 00:32:35.219 END TEST nvme_simple_copy 00:32:35.219 ************************************ 00:32:35.219 23:14:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:32:35.219 00:32:35.219 real 0m7.684s 00:32:35.219 user 0m1.121s 00:32:35.219 sys 0m1.378s 00:32:35.219 23:14:15 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.219 23:14:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:35.219 ************************************ 00:32:35.219 END TEST nvme_scc 00:32:35.219 ************************************ 00:32:35.219 23:14:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:32:35.219 23:14:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:32:35.219 23:14:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:32:35.219 23:14:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:32:35.219 23:14:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:32:35.219 23:14:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:35.219 23:14:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.219 23:14:15 -- common/autotest_common.sh@10 -- # set +x 00:32:35.219 ************************************ 00:32:35.219 START TEST nvme_fdp 00:32:35.219 ************************************ 00:32:35.219 23:14:15 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:32:35.219 * Looking for test storage... 00:32:35.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:35.219 23:14:15 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:35.219 23:14:15 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:35.219 23:14:15 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.477 --rc genhtml_branch_coverage=1 00:32:35.477 --rc genhtml_function_coverage=1 00:32:35.477 --rc genhtml_legend=1 00:32:35.477 --rc geninfo_all_blocks=1 00:32:35.477 --rc geninfo_unexecuted_blocks=1 00:32:35.477 00:32:35.477 ' 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.477 --rc genhtml_branch_coverage=1 00:32:35.477 --rc genhtml_function_coverage=1 00:32:35.477 --rc genhtml_legend=1 00:32:35.477 --rc geninfo_all_blocks=1 00:32:35.477 --rc geninfo_unexecuted_blocks=1 00:32:35.477 00:32:35.477 ' 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.477 --rc genhtml_branch_coverage=1 00:32:35.477 --rc genhtml_function_coverage=1 00:32:35.477 --rc genhtml_legend=1 00:32:35.477 --rc geninfo_all_blocks=1 00:32:35.477 --rc geninfo_unexecuted_blocks=1 00:32:35.477 00:32:35.477 ' 00:32:35.477 23:14:15 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.477 --rc genhtml_branch_coverage=1 00:32:35.477 --rc genhtml_function_coverage=1 00:32:35.477 --rc genhtml_legend=1 00:32:35.477 --rc geninfo_all_blocks=1 00:32:35.477 --rc geninfo_unexecuted_blocks=1 00:32:35.477 00:32:35.477 ' 00:32:35.477 23:14:15 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.477 23:14:15 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.477 23:14:15 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.477 23:14:15 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.477 23:14:15 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.477 23:14:15 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:32:35.477 23:14:15 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:35.477 23:14:15 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:32:35.477 23:14:15 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.477 23:14:15 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:35.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:35.735 Waiting for block devices as requested 00:32:35.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.993 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:41.257 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:41.257 23:14:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:32:41.257 23:14:21 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:41.257 23:14:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:41.257 23:14:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:41.257 23:14:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:41.257 23:14:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.257 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:41.258 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:41.258 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.258 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:41.259 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.259 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 
23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:41.260 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:32:41.260 23:14:21 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:32:41.260 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.260 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
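
Every eval in this stretch of the trace is one pass of the nvme_get loop in functions.sh: each "name : value" line of nvme-cli output is split at the first colon and stored in a bash associative array (ng0n1[mssrl]=128 just above, ng0n1[mcl]=128 next). A condensed sketch of that loop, assuming the same id-ns output layout and an illustrative device name:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing pattern: with IFS=: the read splits each
# line at the first colon; the remainder (inner colons included) lands in val.
declare -A ns
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # field names arrive space-padded
    val=${val#"${val%%[![:space:]]*}"}    # left-trim the value only
    [[ -n $reg ]] && ns[$reg]=$val
done < <(nvme id-ns /dev/ng0n1)           # device name is illustrative
printf 'nsze=%s mssrl=%s\n' "${ns[nsze]}" "${ns[mssrl]}"

Keeping the raw value otherwise intact is deliberate: fields like lbaf0 carry colon-separated sub-fields ("ms:0 lbads:9 rp:0") that later helpers parse further.
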
00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:32:41.261 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.261 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
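
The lbaf table just dumped pairs with the flbas=0x4 entry a few registers earlier: the low nibble of flbas selects the LBA format in use (lbaf4, the entry tagged "(in use)"), and that format's lbads field is the log2 of the logical block size. A small sketch of the decode, with the values copied from the trace above:

#!/usr/bin/env bash
# Decode the in-use logical block size from flbas plus the lbaf table
# (flbas=0x4 -> lbaf4 "ms:0 lbads:12 rp:0" -> 2^12 = 4096-byte blocks).
flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'       # copied from the log above
fmt=$(( flbas & 0xf ))                    # low nibble picks the format index
ref="lbaf$fmt"                            # indirect reference to that entry
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${!ref}")
echo "lbaf$fmt in use: $(( 1 << lbads ))-byte blocks"   # prints 4096

(The low-nibble rule suffices here because nlbaf=7; controllers exposing more than 16 formats would also fold in flbas bits 5-6.)
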
00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
00:32:41.262 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:32:41.262 23:14:21 nvme_fdp -- # nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:32:41.262 23:14:21 nvme_fdp -- # nvme0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:32:41.263 23:14:21 nvme_fdp -- # nvme0n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:41.263 23:14:21 nvme_fdp -- # nvme0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:41.263 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:32:41.263 23:14:21 nvme_fdp -- nvme/functions.sh@60-63 -- # ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:32:41.263 23:14:21 nvme_fdp -- nvme/functions.sh@48-51 -- # next controller: /sys/class/nvme/nvme1, pci=0000:00:10.0, pci_can_use 0000:00:10.0 -> 0, ctrl_dev=nvme1
00:32:41.263 23:14:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:32:41.264 23:14:21 nvme_fdp -- # nvme1: vid=0x1b36 ssvid=0x1af4
'nvme1[sn]="12340 "' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:41.264 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.264 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:41.265 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
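For a manual cross-check, the same identify data can be read straight from nvme-cli; the field values below are the ones the trace just recorded for QEMU's emulated controller (the grep filter and the column spacing are illustrative, not part of the test):

/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 | grep -E '^(vid|ssvid|sn|mn|fr|mdts|nn|oncs|sqes|cqes|subnqn) '
# vid     : 0x1b36
# ssvid   : 0x1af4
# sn      : 12340
# mn      : QEMU NVMe Ctrl
# fr      : 8.0.0
# mdts    : 7
# nn      : 256
# oncs    : 0x15d
# sqes    : 0x66
# cqes    : 0x44
# subnqn  : nqn.2019-08.org.qemu:12340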
00:32:41.266 23:14:21 nvme_fdp -- # nvme1: rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:32:41.266 23:14:21 nvme_fdp -- nvme/functions.sh@55-57 -- # ns_dev=ng1n1, nvme_get ng1n1 id-ns /dev/ng1n1
00:32:41.266 23:14:21 nvme_fdp -- # ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:32:41.267 23:14:21 nvme_fdp -- # ng1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0
00:32:41.267 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:32:41.267 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:32:41.267 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.268 23:14:21 nvme_fdp -- 
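The repetitive trace above is nvme_get's parsing loop: nvme-cli prints one 'name : value' line per register, and the script splits each line on ':' with IFS and evals the pair into a global associative array (nvme1, ng1n1, and so on). A minimal runnable sketch of that pattern, simplified from nvme/functions.sh, with ns_regs as an illustrative array name:

  #!/usr/bin/env bash
  # Sketch of the nvme_get parsing pattern traced above (illustrative,
  # simplified from nvme/functions.sh): each "name : value" line from
  # nvme-cli becomes one entry in an associative array.
  declare -A ns_regs
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue      # skip lines with no value part
      reg=${reg//[[:space:]]/}       # drop the padding around the name
      ns_regs[$reg]=${val# }         # drop the one leading space
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)
  printf 'nsze=%s flbas=%s\n' "${ns_regs[nsze]}" "${ns_regs[flbas]}"

For multi-colon lines such as the lbaf entries, read leaves everything after the first ':' in val, which is why values like 'ms:64 lbads:12 rp:0 (in use)' survive intact in the trace.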
00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:32:41.268 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:32:41.268 23:14:21 nvme_fdp -- # nvme1n1 id-ns registers (summary): identical to ng1n1 above, down to
00:32:41.269 23:14:21 nvme_fdp -- #   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a flbas=0x7 and lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:32:41.269 23:14:21 nvme_fdp -- scripts/common.sh@27 -- # pci_can_use 0000:00:12.0: return 0
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:32:41.269 23:14:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
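The bookkeeping lines above show how each controller is registered once its namespaces are parsed: ctrls, nvmes, and bdfs are associative arrays keyed by the device name, with the PCI address taken from sysfs. A sketch of that enumeration, under the assumption that the BDF is read from the /sys/class/nvme/nvmeX/device symlink (functions.sh resolves it through its own pci helpers):

  #!/usr/bin/env bash
  # Walk /sys/class/nvme and record each controller's PCI address (BDF),
  # mirroring the bdfs["$ctrl_dev"]=... assignments in the trace.
  declare -A bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      # nvmeX/device links to the PCI device directory, whose basename
      # is the domain:bus:device.function address.
      bdfs[${ctrl##*/}]=$(basename "$(readlink -f "$ctrl/device")")
  done
  for name in "${!bdfs[@]}"; do
      printf '%s -> %s\n' "$name" "${bdfs[$name]}"   # e.g. nvme2 -> 0000:00:12.0
  done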
[[ -n '' ]] 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:32:41.270 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.536 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:41.537 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.537 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:41.538 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.538 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
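(The xtrace above is SPDK's nvme/functions.sh helper nvme_get populating a global bash associative array, here nvme2, from nvme-cli output: line @16 runs the identify command, @21 splits each output line on the first ':' via IFS, @22 skips lines with no value, and @23 evals the pair into the array. A minimal sketch of that loop, reconstructed from the trace alone; the key/value trimming details are assumptions, not the verbatim SPDK source:

#!/usr/bin/env bash
NVME_CMD=/usr/local/src/nvme-cli/nvme      # binary visible at @16 in the trace

nvme_get() {
    local ref=$1 reg val                   # @17: target array name, then the command
    shift                                  # @18
    local -gA "$ref=()"                    # @20: global associative array, e.g. nvme2
    while IFS=: read -r reg val; do        # @21: split "reg : val" on the first ':'
        [[ -n $val ]] || continue          # @22: skip headers and blank values
        reg=${reg//[[:space:]]/}           # trim padding from the key (assumption)
        val=${val# }                       # drop one leading space (assumption)
        eval "${ref}[$reg]=\"\$val\""      # @23: e.g. nvme2[sqes]="0x66"
    done < <("$NVME_CMD" "$@")             # @16: e.g. nvme id-ctrl /dev/nvme2
}

After nvme_get nvme2 id-ctrl /dev/nvme2, ${nvme2[sqes]} is 0x66 and ${nvme2[subnqn]} is nqn.2019-08.org.qemu:12342, matching the values recorded in the trace. Note that because read takes the remainder of the line into val, multi-colon fields survive intact, which is why the trace stores nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' as a single value.)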
00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.539 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 
23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.540 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.541 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:32:41.542 23:14:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 
23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.542 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:32:41.543 
23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
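(Each namespace parsed in this trace reports flbas=0x4 and marks lbaf4, "ms:0 lbads:12 rp:0", as the format in use: per the NVMe spec the low nibble of FLBAS indexes the LBA-format table, and lbads is a log2 data size, so these namespaces use 4096-byte blocks. A small self-contained sketch of that decode, seeded with the exact values the trace stored for ng2n1; the bit-field interpretation comes from the spec, not from this log:

#!/usr/bin/env bash
# Values recorded above for ng2n1 by nvme_get.
declare -A ng2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    [nsze]=0x100000
)
fmt=$(( ng2n1[flbas] & 0xf ))        # low nibble selects the LBA format -> 4
lbaf=${ng2n1[lbaf$fmt]}              # 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}                # '12 rp:0 (in use)'
lbads=${lbads%% *}                   # '12'
bs=$(( 1 << lbads ))                 # 2^12 = 4096-byte data blocks
bytes=$(( ng2n1[nsze] * bs ))        # 0x100000 blocks * 4096
echo "lbaf$fmt: ${bs}B blocks, $bytes bytes total"
# -> lbaf4: 4096B blocks, 4294967296 bytes total

So each of the ng2nX/nvme2nX namespaces enumerated here is 4 GiB of 4 KiB blocks.)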
00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:32:41.543 23:14:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.543 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.544 23:14:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:41.544 23:14:21 nvme_fdp -- 
00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:32:41.544 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:32:41.545 23:14:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
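The functions.sh@54-58 records above are the namespace walk: one extglob pattern matches both the generic character nodes (ng2nY) and the block nodes (nvme2nY) under the controller's sysfs directory, and each hit is keyed by its namespace ID in _ctrl_ns. Because both node flavors share the same key and ng* sorts before nvme*, the block-device name recorded later wins; that is why _ctrl_ns[3] was set to ng2n3 earlier and is overwritten with nvme2n3 further down. A sketch of the same enumeration, with the controller path as an assumed example:

#!/usr/bin/env bash
# Sketch of the loop at nvme/functions.sh@54-58. The controller path is
# an assumption for illustration; on a host without it the loop is a no-op.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                  # ng2n1, nvme2n1, ...
    _ctrl_ns[${ns_dev##*n}]=$ns_dev   # keyed by NSID; nvme2nX lands last
done
declare -p _ctrl_ns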
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:32:41.545 23:14:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:32:41.546 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:32:41.546 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:32:41.546 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
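All three namespaces report identical geometry: nlbaf=7 means eight LBA formats (0-7), and flbas=0x4 selects lbaf4, whose lbads of 12 gives 4096-byte data blocks with no per-block metadata (ms:0). A quick decode, with the values hard-coded from the nvme2n2 dump above:

# flbas low bits select the in-use LBA format; lbads is log2(block size).
flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'
idx=$((flbas & 0xf))                        # -> 4
lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}  # -> 12
echo "lbaf$idx in use: $((1 << lbads))-byte blocks"   # prints 4096-byte blocks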
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:32:41.547 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:32:41.548 23:14:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:32:41.549 23:14:22 nvme_fdp -- scripts/common.sh@18-27 -- # allow/block lists empty; return 0
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
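Before adopting nvme3 the harness asks pci_can_use whether the device's BDF is eligible; the empty left-hand sides in the scripts/common.sh tests ([[ =~ 0000:00:13.0 ]] and [[ -z '' ]]) show that both the allow and block lists are unset on this run, so the check falls through to return 0. A behavior-level sketch of such a gate, assuming PCI_BLOCKED/PCI_ALLOWED-style space-separated lists; the names and exact precedence are illustrative, not taken from SPDK's source:

#!/usr/bin/env bash
# Behavior-level sketch of a pci_can_use-style gate, not the SPDK
# implementation. PCI_ALLOWED/PCI_BLOCKED are assumed space-separated
# BDF lists; leaving both empty reproduces the "return 0" seen above.
pci_can_use() {
    local i
    for i in $PCI_BLOCKED; do              # an explicit block wins
        [[ $i == "$1" ]] && return 1
    done
    [[ -z $PCI_ALLOWED ]] && return 0      # no allow list: accept everything
    for i in $PCI_ALLOWED; do
        [[ $i == "$1" ]] && return 0
    done
    return 1
}
pci_can_use 0000:00:13.0 && echo "0000:00:13.0 is usable"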
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7
00:32:41.549 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:41.550 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:41.551 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:32:41.552 23:14:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:32:41.552 23:14:22 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:32:41.552 23:14:22 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:42.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:42.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:42.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:42.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:42.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:42.715 23:14:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:42.715 23:14:23 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:42.715 23:14:23 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.715 23:14:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:42.715 ************************************ 00:32:42.715 START TEST nvme_flexible_data_placement 00:32:42.715 ************************************ 00:32:42.715 23:14:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:42.973 Initializing NVMe Controllers 00:32:42.973 Attaching to 0000:00:13.0 00:32:42.973 Controller supports FDP Attached to 0000:00:13.0 00:32:42.973 Namespace ID: 1 Endurance Group ID: 1 00:32:42.973 Initialization complete. 
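Before the fdp binary above was launched, functions.sh had already cached every controller's identify output and used it to pick the FDP-capable device. The pattern in the long trace: read each "reg : val" pair with a colon IFS into a per-controller associative array (eval is used there because the array name is dynamic), then test the cached ctratt value for bit 19, the Flexible Data Placement capability. A minimal stand-alone sketch of that flow, with a hypothetical three-line sample standing in for the real identify dump:

#!/usr/bin/env bash
# Sketch only: functions.sh evals into a dynamically named array; a fixed name is used here.
declare -A nvme3

# Cache "reg : val" pairs, one per line, into the associative array.
while IFS=: read -r reg val; do
    [[ -n $val ]] && nvme3[${reg// /}]=${val# }
done <<'EOF'
vid    : 0x1b36
ssvid  : 0x1af4
ctratt : 0x88010
EOF

# CTRATT bit 19 advertises FDP: 0x88010 has it set, plain 0x8000 does not,
# which is why only nvme3 survives the scan above.
ctratt=${nvme3[ctratt]}
if (( ctratt & 1 << 19 )); then
    echo "controller supports FDP"
fi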
00:32:42.973 00:32:42.973 ================================== 00:32:42.973 == FDP tests for Namespace: #01 == 00:32:42.973 ================================== 00:32:42.973 00:32:42.973 Get Feature: FDP: 00:32:42.973 ================= 00:32:42.973 Enabled: Yes 00:32:42.973 FDP configuration Index: 0 00:32:42.973 00:32:42.973 FDP configurations log page 00:32:42.973 =========================== 00:32:42.973 Number of FDP configurations: 1 00:32:42.973 Version: 0 00:32:42.973 Size: 112 00:32:42.973 FDP Configuration Descriptor: 0 00:32:42.973 Descriptor Size: 96 00:32:42.973 Reclaim Group Identifier format: 2 00:32:42.973 FDP Volatile Write Cache: Not Present 00:32:42.973 FDP Configuration: Valid 00:32:42.973 Vendor Specific Size: 0 00:32:42.973 Number of Reclaim Groups: 2 00:32:42.973 Number of Reclaim Unit Handles: 8 00:32:42.973 Max Placement Identifiers: 128 00:32:42.973 Number of Namespaces Supported: 256 00:32:42.973 Reclaim Unit Nominal Size: 6000000 bytes 00:32:42.973 Estimated Reclaim Unit Time Limit: Not Reported 00:32:42.973 RUH Desc #000: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #001: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #002: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #003: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #004: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #005: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #006: RUH Type: Initially Isolated 00:32:42.973 RUH Desc #007: RUH Type: Initially Isolated 00:32:42.973 00:32:42.973 FDP reclaim unit handle usage log page 00:32:42.973 ====================================== 00:32:42.973 Number of Reclaim Unit Handles: 8 00:32:42.973 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:42.973 RUH Usage Desc #001: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #002: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #003: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #004: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #005: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #006: RUH Attributes: Unused 00:32:42.973 RUH Usage Desc #007: RUH Attributes: Unused 00:32:42.973 00:32:42.973 FDP statistics log page 00:32:42.973 ======================= 00:32:42.973 Host bytes with metadata written: 1017106432 00:32:42.973 Media bytes with metadata written: 1017356288 00:32:42.973 Media bytes erased: 0 00:32:42.973 00:32:42.973 FDP Reclaim unit handle status 00:32:42.973 ============================== 00:32:42.973 Number of RUHS descriptors: 2 00:32:42.973 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005603 00:32:42.973 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:32:42.973 00:32:42.973 FDP write on placement id: 0 success 00:32:42.973 00:32:42.973 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:32:42.973 00:32:42.973 IO mgmt send: RUH update for Placement ID: #0 Success 00:32:42.973 00:32:42.973 Get Feature: FDP Events for Placement handle: #0 00:32:42.973 ======================== 00:32:42.973 Number of FDP Events: 6 00:32:42.973 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:32:42.973 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:32:42.973 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:32:42.973 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:32:42.973 FDP Event: #4 Type: Media Reallocated Enabled: No 00:32:42.973 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:32:42.973 00:32:42.973 FDP events log
page 00:32:42.973 =================== 00:32:42.973 Number of FDP events: 1 00:32:42.973 FDP Event #0: 00:32:42.973 Event Type: RU Not Written to Capacity 00:32:42.973 Placement Identifier: Valid 00:32:42.973 NSID: Valid 00:32:42.973 Location: Valid 00:32:42.973 Placement Identifier: 0 00:32:42.973 Event Timestamp: 6 00:32:42.973 Namespace Identifier: 1 00:32:42.973 Reclaim Group Identifier: 0 00:32:42.973 Reclaim Unit Handle Identifier: 0 00:32:42.973 00:32:42.973 FDP test passed 00:32:42.973 00:32:42.973 real 0m0.239s 00:32:42.973 user 0m0.075s 00:32:42.973 sys 0m0.061s 00:32:42.973 23:14:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.973 ************************************ 00:32:42.973 END TEST nvme_flexible_data_placement 00:32:42.973 ************************************ 00:32:42.973 23:14:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:32:42.973 00:32:42.973 real 0m7.828s 00:32:42.973 user 0m1.144s 00:32:42.973 sys 0m1.397s 00:32:42.973 ************************************ 00:32:42.973 END TEST nvme_fdp 00:32:42.973 23:14:23 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.973 23:14:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:42.973 ************************************ 00:32:42.973 23:14:23 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:32:42.974 23:14:23 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:42.974 23:14:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:42.974 23:14:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.974 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:32:43.232 ************************************ 00:32:43.232 START TEST nvme_rpc 00:32:43.232 ************************************ 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:43.232 * Looking for test storage... 
00:32:43.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.232 23:14:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.232 --rc genhtml_branch_coverage=1 00:32:43.232 --rc genhtml_function_coverage=1 00:32:43.232 --rc genhtml_legend=1 00:32:43.232 --rc geninfo_all_blocks=1 00:32:43.232 --rc geninfo_unexecuted_blocks=1 00:32:43.232 00:32:43.232 ' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.232 --rc genhtml_branch_coverage=1 00:32:43.232 --rc genhtml_function_coverage=1 00:32:43.232 --rc genhtml_legend=1 00:32:43.232 --rc geninfo_all_blocks=1 00:32:43.232 --rc geninfo_unexecuted_blocks=1 00:32:43.232 00:32:43.232 ' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.232 --rc genhtml_branch_coverage=1 00:32:43.232 --rc genhtml_function_coverage=1 00:32:43.232 --rc genhtml_legend=1 00:32:43.232 --rc geninfo_all_blocks=1 00:32:43.232 --rc geninfo_unexecuted_blocks=1 00:32:43.232 00:32:43.232 ' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.232 --rc genhtml_branch_coverage=1 00:32:43.232 --rc genhtml_function_coverage=1 00:32:43.232 --rc genhtml_legend=1 00:32:43.232 --rc geninfo_all_blocks=1 00:32:43.232 --rc geninfo_unexecuted_blocks=1 00:32:43.232 00:32:43.232 ' 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65781 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65781 00:32:43.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.232 23:14:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65781 ']' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.232 23:14:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:43.492 [2024-12-09 23:14:23.888354] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
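The get_first_nvme_bdf call traced above builds its candidate list from gen_nvme.sh, whose JSON config is filtered down to PCI addresses with jq. A condensed, hypothetical form of that helper (rootdir is the SPDK repo root, as in the trace):

# Condensed sketch of the BDF discovery traced above.
get_first_nvme_bdf() {
    local -a bdfs
    # gen_nvme.sh emits one bdev_nvme attach entry per controller; traddr is the PCI BDF.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1
    echo "${bdfs[0]}"
}

bdf=$(get_first_nvme_bdf)   # resolves to 0000:00:10.0 on this run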
00:32:43.492 [2024-12-09 23:14:23.888477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65781 ] 00:32:43.492 [2024-12-09 23:14:24.047435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:43.750 [2024-12-09 23:14:24.151137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.750 [2024-12-09 23:14:24.151283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.319 23:14:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.319 23:14:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:44.319 23:14:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:44.581 Nvme0n1 00:32:44.581 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:44.581 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:44.581 request: 00:32:44.581 { 00:32:44.581 "bdev_name": "Nvme0n1", 00:32:44.581 "filename": "non_existing_file", 00:32:44.581 "method": "bdev_nvme_apply_firmware", 00:32:44.581 "req_id": 1 00:32:44.581 } 00:32:44.581 Got JSON-RPC error response 00:32:44.581 response: 00:32:44.581 { 00:32:44.581 "code": -32603, 00:32:44.581 "message": "open file failed." 00:32:44.581 } 00:32:44.846 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:44.846 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:44.846 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:44.846 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:44.846 23:14:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65781 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65781 ']' 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65781 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65781 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.846 killing process with pid 65781 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65781' 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65781 00:32:44.846 23:14:25 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65781 00:32:46.744 00:32:46.744 real 0m3.290s 00:32:46.744 user 0m6.258s 00:32:46.744 sys 0m0.486s 00:32:46.744 23:14:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.744 ************************************ 00:32:46.744 END TEST nvme_rpc 00:32:46.744 ************************************ 00:32:46.744 23:14:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:46.744 23:14:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:46.744 23:14:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:32:46.744 23:14:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.744 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:32:46.744 ************************************ 00:32:46.744 START TEST nvme_rpc_timeouts 00:32:46.744 ************************************ 00:32:46.744 23:14:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:46.745 * Looking for test storage... 00:32:46.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.745 23:14:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.745 --rc genhtml_branch_coverage=1 00:32:46.745 --rc genhtml_function_coverage=1 00:32:46.745 --rc genhtml_legend=1 00:32:46.745 --rc geninfo_all_blocks=1 00:32:46.745 --rc geninfo_unexecuted_blocks=1 00:32:46.745 00:32:46.745 ' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.745 --rc genhtml_branch_coverage=1 00:32:46.745 --rc genhtml_function_coverage=1 00:32:46.745 --rc genhtml_legend=1 00:32:46.745 --rc geninfo_all_blocks=1 00:32:46.745 --rc geninfo_unexecuted_blocks=1 00:32:46.745 00:32:46.745 ' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.745 --rc genhtml_branch_coverage=1 00:32:46.745 --rc genhtml_function_coverage=1 00:32:46.745 --rc genhtml_legend=1 00:32:46.745 --rc geninfo_all_blocks=1 00:32:46.745 --rc geninfo_unexecuted_blocks=1 00:32:46.745 00:32:46.745 ' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.745 --rc genhtml_branch_coverage=1 00:32:46.745 --rc genhtml_function_coverage=1 00:32:46.745 --rc genhtml_legend=1 00:32:46.745 --rc geninfo_all_blocks=1 00:32:46.745 --rc geninfo_unexecuted_blocks=1 00:32:46.745 00:32:46.745 ' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65846 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65846 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65884 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
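Both rpc suites run the same lcov probe seen above: scripts/common.sh splits the two version strings on ".", "-" and ":" and compares them field by field to decide whether the installed lcov predates 2.x. Roughly, as a hypothetical condensation that assumes purely numeric fields:

# Condensed sketch of lt/cmp_versions as traced above.
lt() {   # true when version $1 sorts before version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "use the pre-2.x lcov option names"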
00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65884 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65884 ']' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.745 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:46.745 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:46.745 [2024-12-09 23:14:27.150873] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:46.745 [2024-12-09 23:14:27.151002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65884 ] 00:32:46.745 [2024-12-09 23:14:27.306609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.002 [2024-12-09 23:14:27.409907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.002 [2024-12-09 23:14:27.410009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.572 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.572 Checking default timeout settings: 00:32:47.572 23:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:32:47.572 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:47.572 23:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:47.830 Making settings changes with rpc: 00:32:47.830 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:47.830 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:48.088 Check default vs. modified settings: 00:32:48.088 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:32:48.088 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:48.346 Setting action_on_timeout is changed as expected. 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:48.346 Setting timeout_us is changed as expected. 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:48.346 Setting timeout_admin_us is changed as expected. 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65846 /tmp/settings_modified_65846 00:32:48.346 23:14:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65884 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65884 ']' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65884 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65884 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.346 killing process with pid 65884 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65884' 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65884 00:32:48.346 23:14:28 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65884 00:32:50.251 RPC TIMEOUT SETTING TEST PASSED. 00:32:50.251 23:14:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
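[Note: the loop above is the core of the test — save_config dumps the target configuration before and after bdev_nvme_set_options, then each timeout field is pulled from both snapshots with grep/awk/sed and required to differ. Condensed into a standalone sketch (snapshot file names are illustrative; the rpc.py calls and the field-extraction pipeline are the ones shown in the trace):]

    ./scripts/rpc.py save_config > settings_default
    ./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    ./scripts/rpc.py save_config > settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # grab the field's value from each snapshot and strip JSON punctuation
        before=$(grep "$setting" settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done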
00:32:50.251 00:32:50.251 real 0m3.450s 00:32:50.251 user 0m6.745s 00:32:50.251 sys 0m0.480s 00:32:50.251 ************************************ 00:32:50.251 END TEST nvme_rpc_timeouts 00:32:50.251 ************************************ 00:32:50.251 23:14:30 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.251 23:14:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:50.251 23:14:30 -- spdk/autotest.sh@239 -- # uname -s 00:32:50.251 23:14:30 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:32:50.251 23:14:30 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:50.251 23:14:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.251 23:14:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.251 23:14:30 -- common/autotest_common.sh@10 -- # set +x 00:32:50.251 ************************************ 00:32:50.251 START TEST sw_hotplug 00:32:50.251 ************************************ 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:50.251 * Looking for test storage... 00:32:50.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.251 23:14:30 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.251 --rc genhtml_branch_coverage=1 00:32:50.251 --rc genhtml_function_coverage=1 00:32:50.251 --rc genhtml_legend=1 00:32:50.251 --rc geninfo_all_blocks=1 00:32:50.251 --rc geninfo_unexecuted_blocks=1 00:32:50.251 00:32:50.251 ' 00:32:50.251 23:14:30 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.251 --rc genhtml_branch_coverage=1 00:32:50.251 --rc genhtml_function_coverage=1 00:32:50.251 --rc genhtml_legend=1 00:32:50.251 --rc geninfo_all_blocks=1 00:32:50.251 --rc geninfo_unexecuted_blocks=1 00:32:50.252 00:32:50.252 ' 00:32:50.252 23:14:30 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.252 --rc genhtml_branch_coverage=1 00:32:50.252 --rc genhtml_function_coverage=1 00:32:50.252 --rc genhtml_legend=1 00:32:50.252 --rc geninfo_all_blocks=1 00:32:50.252 --rc geninfo_unexecuted_blocks=1 00:32:50.252 00:32:50.252 ' 00:32:50.252 23:14:30 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.252 --rc genhtml_branch_coverage=1 00:32:50.252 --rc genhtml_function_coverage=1 00:32:50.252 --rc genhtml_legend=1 00:32:50.252 --rc geninfo_all_blocks=1 00:32:50.252 --rc geninfo_unexecuted_blocks=1 00:32:50.252 00:32:50.252 ' 00:32:50.252 23:14:30 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:50.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:50.509 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:50.509 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:50.509 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:50.509 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:50.509 23:14:30 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:32:50.509 23:14:30 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:32:50.509 23:14:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:32:50.509 23:14:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@233 -- # local class 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:32:50.509 23:14:30 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:50.509 23:14:31 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:32:50.509 23:14:31 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:50.509 23:14:31 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:32:50.509 23:14:31 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:32:50.509 23:14:31 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:50.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:51.103 Waiting for block devices as requested 00:32:51.103 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.103 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.103 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.103 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:56.371 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:56.371 23:14:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:32:56.371 23:14:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:56.629 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:32:56.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:56.629 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:32:56.886 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:32:57.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:57.144 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66745 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:32:57.144 23:14:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:32:57.144 23:14:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:32:57.401 Initializing NVMe Controllers 00:32:57.401 Attaching to 0000:00:10.0 00:32:57.401 Attaching to 0000:00:11.0 00:32:57.401 Attached to 0000:00:10.0 00:32:57.401 Attached to 0000:00:11.0 00:32:57.401 Initialization complete. Starting I/O... 
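[Note: from here the harness runs three surprise-removal cycles — build/examples/hotplug keeps I/O running against the attached controllers while the shell detaches and re-attaches them. The exact sysfs nodes written by sw_hotplug.sh are not visible in this trace (only the bare `echo 1` and BDF lines appear), but the trap's `echo 1 > /sys/bus/pci/rescan` points at the standard Linux PCI mechanism, sketched here with an illustrative BDF; rebinding to uio_pci_generic is then left to setup.sh:]

    bdf=0000:00:10.0                             # illustrative device
    echo 1 > /sys/bus/pci/devices/$bdf/remove    # surprise-remove the controller
    sleep 6                                      # hotplug_wait from the test config
    echo 1 > /sys/bus/pci/rescan                 # re-enumerate the bus so it comes back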
00:32:57.401 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:57.401 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:32:57.401 00:32:58.337 QEMU NVMe Ctrl (12340 ): 2437 I/Os completed (+2437) 00:32:58.337 QEMU NVMe Ctrl (12341 ): 2459 I/Os completed (+2459) 00:32:58.337 00:32:59.267 QEMU NVMe Ctrl (12340 ): 5844 I/Os completed (+3407) 00:32:59.267 QEMU NVMe Ctrl (12341 ): 5693 I/Os completed (+3234) 00:32:59.267 00:33:00.639 QEMU NVMe Ctrl (12340 ): 8922 I/Os completed (+3078) 00:33:00.639 QEMU NVMe Ctrl (12341 ): 8761 I/Os completed (+3068) 00:33:00.639 00:33:01.578 QEMU NVMe Ctrl (12340 ): 11937 I/Os completed (+3015) 00:33:01.578 QEMU NVMe Ctrl (12341 ): 11892 I/Os completed (+3131) 00:33:01.578 00:33:02.511 QEMU NVMe Ctrl (12340 ): 15106 I/Os completed (+3169) 00:33:02.511 QEMU NVMe Ctrl (12341 ): 14970 I/Os completed (+3078) 00:33:02.511 00:33:03.079 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:03.079 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:03.079 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:03.079 [2024-12-09 23:14:43.706682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:03.079 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:03.079 [2024-12-09 23:14:43.708099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.708159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.708187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.708213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:03.079 [2024-12-09 23:14:43.710271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.710327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.710349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.079 [2024-12-09 23:14:43.710370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.335 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:03.335 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:03.336 [2024-12-09 23:14:43.728524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:03.336 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:03.336 [2024-12-09 23:14:43.729736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.729788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.729820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.729844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:03.336 [2024-12-09 23:14:43.731574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.731635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.731661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 [2024-12-09 23:14:43.731684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:03.336 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:03.336 EAL: Scan for (pci) bus failed. 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:03.336 Attaching to 0000:00:10.0 00:33:03.336 Attached to 0000:00:10.0 00:33:03.336 QEMU NVMe Ctrl (12340 ): 7 I/Os completed (+7) 00:33:03.336 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:03.336 23:14:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:03.594 Attaching to 0000:00:11.0 00:33:03.594 Attached to 0000:00:11.0 00:33:04.527 QEMU NVMe Ctrl (12340 ): 3254 I/Os completed (+3247) 00:33:04.527 QEMU NVMe Ctrl (12341 ): 3076 I/Os completed (+3076) 00:33:04.527 00:33:05.465 QEMU NVMe Ctrl (12340 ): 6717 I/Os completed (+3463) 00:33:05.465 QEMU NVMe Ctrl (12341 ): 6465 I/Os completed (+3389) 00:33:05.465 00:33:06.402 QEMU NVMe Ctrl (12340 ): 9807 I/Os completed (+3090) 00:33:06.402 QEMU NVMe Ctrl (12341 ): 9758 I/Os completed (+3293) 00:33:06.402 00:33:07.344 QEMU NVMe Ctrl (12340 ): 12951 I/Os completed (+3144) 00:33:07.344 QEMU NVMe Ctrl (12341 ): 13144 I/Os completed (+3386) 00:33:07.344 00:33:08.286 QEMU NVMe Ctrl (12340 ): 16120 I/Os completed (+3169) 00:33:08.286 QEMU NVMe Ctrl (12341 ): 16464 I/Os completed (+3320) 00:33:08.286 00:33:09.670 QEMU NVMe Ctrl (12340 ): 19246 I/Os completed (+3126) 00:33:09.670 QEMU NVMe Ctrl (12341 ): 19633 I/Os completed (+3169) 00:33:09.670 00:33:10.612 QEMU NVMe Ctrl (12340 ): 22263 I/Os completed (+3017) 00:33:10.612 
QEMU NVMe Ctrl (12341 ): 22654 I/Os completed (+3021) 00:33:10.612 00:33:11.567 QEMU NVMe Ctrl (12340 ): 25537 I/Os completed (+3274) 00:33:11.567 QEMU NVMe Ctrl (12341 ): 25976 I/Os completed (+3322) 00:33:11.567 00:33:12.504 QEMU NVMe Ctrl (12340 ): 28568 I/Os completed (+3031) 00:33:12.504 QEMU NVMe Ctrl (12341 ): 29001 I/Os completed (+3025) 00:33:12.504 00:33:13.444 QEMU NVMe Ctrl (12340 ): 31591 I/Os completed (+3023) 00:33:13.444 QEMU NVMe Ctrl (12341 ): 32043 I/Os completed (+3042) 00:33:13.444 00:33:14.384 QEMU NVMe Ctrl (12340 ): 34965 I/Os completed (+3374) 00:33:14.384 QEMU NVMe Ctrl (12341 ): 35479 I/Os completed (+3436) 00:33:14.384 00:33:15.333 QEMU NVMe Ctrl (12340 ): 37949 I/Os completed (+2984) 00:33:15.333 QEMU NVMe Ctrl (12341 ): 38590 I/Os completed (+3111) 00:33:15.333 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:15.594 [2024-12-09 23:14:55.976547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:15.594 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:15.594 [2024-12-09 23:14:55.977706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.977759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.977778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.977795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:15.594 [2024-12-09 23:14:55.979766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.979814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.979828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:55.979843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:15.594 23:14:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:15.594 [2024-12-09 23:14:55.999683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:15.594 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:15.594 [2024-12-09 23:14:56.000757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.000804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.000825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.000841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:15.594 [2024-12-09 23:14:56.002558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.002601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.002616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 [2024-12-09 23:14:56.002632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:15.594 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:15.594 EAL: Scan for (pci) bus failed. 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:15.594 Attaching to 0000:00:10.0 00:33:15.594 Attached to 0000:00:10.0 00:33:15.594 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:15.856 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:15.856 23:14:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:15.856 Attaching to 0000:00:11.0 00:33:15.856 Attached to 0000:00:11.0 00:33:16.425 QEMU NVMe Ctrl (12340 ): 2198 I/Os completed (+2198) 00:33:16.425 QEMU NVMe Ctrl (12341 ): 1957 I/Os completed (+1957) 00:33:16.425 00:33:17.362 QEMU NVMe Ctrl (12340 ): 5218 I/Os completed (+3020) 00:33:17.362 QEMU NVMe Ctrl (12341 ): 4954 I/Os completed (+2997) 00:33:17.362 00:33:18.301 QEMU NVMe Ctrl (12340 ): 8248 I/Os completed (+3030) 00:33:18.301 QEMU NVMe Ctrl (12341 ): 8057 I/Os completed (+3103) 00:33:18.301 00:33:19.683 QEMU NVMe Ctrl (12340 ): 11252 I/Os completed (+3004) 00:33:19.683 QEMU NVMe Ctrl (12341 ): 11123 I/Os completed (+3066) 00:33:19.683 00:33:20.643 QEMU NVMe Ctrl (12340 ): 14330 I/Os completed (+3078) 00:33:20.643 QEMU NVMe Ctrl (12341 ): 14199 I/Os completed (+3076) 00:33:20.643 00:33:21.588 QEMU NVMe Ctrl (12340 ): 17350 I/Os completed (+3020) 00:33:21.588 QEMU NVMe Ctrl (12341 ): 17226 I/Os completed (+3027) 00:33:21.588 00:33:22.530 QEMU NVMe Ctrl (12340 ): 20500 I/Os completed (+3150) 00:33:22.530 QEMU NVMe Ctrl (12341 ): 20420 I/Os completed (+3194) 00:33:22.530 
00:33:23.474 QEMU NVMe Ctrl (12340 ): 23607 I/Os completed (+3107) 00:33:23.474 QEMU NVMe Ctrl (12341 ): 23548 I/Os completed (+3128) 00:33:23.474 00:33:24.415 QEMU NVMe Ctrl (12340 ): 26704 I/Os completed (+3097) 00:33:24.415 QEMU NVMe Ctrl (12341 ): 26606 I/Os completed (+3058) 00:33:24.415 00:33:25.355 QEMU NVMe Ctrl (12340 ): 29784 I/Os completed (+3080) 00:33:25.355 QEMU NVMe Ctrl (12341 ): 29682 I/Os completed (+3076) 00:33:25.355 00:33:26.303 QEMU NVMe Ctrl (12340 ): 32849 I/Os completed (+3065) 00:33:26.303 QEMU NVMe Ctrl (12341 ): 32775 I/Os completed (+3093) 00:33:26.303 00:33:27.686 QEMU NVMe Ctrl (12340 ): 35848 I/Os completed (+2999) 00:33:27.686 QEMU NVMe Ctrl (12341 ): 35743 I/Os completed (+2968) 00:33:27.686 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:27.686 [2024-12-09 23:15:08.243805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:27.686 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:27.686 [2024-12-09 23:15:08.246590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.246649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.246671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.246689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:27.686 [2024-12-09 23:15:08.248591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.248638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.248654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.248668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:27.686 [2024-12-09 23:15:08.266325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:27.686 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:27.686 [2024-12-09 23:15:08.267374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.267419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.267438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.267452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:27.686 [2024-12-09 23:15:08.269102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.269142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.269161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 [2024-12-09 23:15:08.269173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:27.686 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:27.686 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:27.686 EAL: Scan for (pci) bus failed. 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:27.950 Attaching to 0000:00:10.0 00:33:27.950 Attached to 0000:00:10.0 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:27.950 23:15:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:27.950 Attaching to 0000:00:11.0 00:33:27.950 Attached to 0000:00:11.0 00:33:27.950 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:27.950 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:27.950 [2024-12-09 23:15:08.487530] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:33:40.257 23:15:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:40.257 23:15:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:40.257 23:15:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.78 00:33:40.257 23:15:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.78 00:33:40.257 23:15:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:33:40.257 23:15:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.78 00:33:40.257 23:15:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.78 2 00:33:40.257 remove_attach_helper took 42.78s to complete (handling 2 nvme drive(s)) 23:15:20 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66745 00:33:46.901 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66745) - No such process 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66745 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:33:46.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67293 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67293 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67293 ']' 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.901 23:15:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.901 23:15:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:46.901 [2024-12-09 23:15:26.568605] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:33:46.901 [2024-12-09 23:15:26.568729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67293 ] 00:33:46.901 [2024-12-09 23:15:26.726768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.901 [2024-12-09 23:15:26.825873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.901 23:15:27 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.901 23:15:27 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:33:46.902 23:15:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:46.902 23:15:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:53.485 23:15:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.485 23:15:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:53.485 23:15:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:53.485 23:15:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:53.485 [2024-12-09 23:15:33.518197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
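[Note: with use_bdev=true the helper polls the target instead of sysfs — the bdev_bdfs function asks the running spdk_tgt which NVMe controllers are still attached. As traced below, it is a one-liner over the RPC output:]

    # list PCI addresses of all NVMe bdevs currently attached to the target
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u

[The helper loops until the removed BDF disappears from this list, and later until the full set of BDFs reappears in it.]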
[0000:00:10.0, 0] in failed state. 00:33:53.485 [2024-12-09 23:15:33.519606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.519645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.519659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.519677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.519685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.519694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.519701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.519710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.519717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.519729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.519736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.519744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.918190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:53.485 [2024-12-09 23:15:33.919592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.919624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.919637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.919653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.919662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.919669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.919678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.919685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 [2024-12-09 23:15:33.919701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:53.485 [2024-12-09 23:15:33.919709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.485 [2024-12-09 23:15:33.919715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:53.485 23:15:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.485 23:15:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:53.485 23:15:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:53.485 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:53.747 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:53.747 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:53.747 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:53.747 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:53.747 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:33:53.748 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:53.748 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:53.748 23:15:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:05.979 [2024-12-09 23:15:46.318403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:05.979 [2024-12-09 23:15:46.319994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:05.979 [2024-12-09 23:15:46.320029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.979 [2024-12-09 23:15:46.320041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.979 [2024-12-09 23:15:46.320060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:05.979 [2024-12-09 23:15:46.320068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.979 [2024-12-09 23:15:46.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.979 [2024-12-09 23:15:46.320085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:05.979 [2024-12-09 23:15:46.320093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.979 [2024-12-09 23:15:46.320100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.979 [2024-12-09 23:15:46.320108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:05.979 [2024-12-09 23:15:46.320115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.979 [2024-12-09 23:15:46.320123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:05.979 23:15:46 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:05.979 23:15:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:05.979 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:06.240 [2024-12-09 23:15:46.818412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:06.240 [2024-12-09 23:15:46.819819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:06.240 [2024-12-09 23:15:46.819854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.240 [2024-12-09 23:15:46.819868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.240 [2024-12-09 23:15:46.819884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:06.240 [2024-12-09 23:15:46.819894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.240 [2024-12-09 23:15:46.819901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.240 [2024-12-09 23:15:46.819910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:06.240 [2024-12-09 23:15:46.819916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.240 [2024-12-09 23:15:46.819925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.240 [2024-12-09 23:15:46.819933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:06.240 [2024-12-09 23:15:46.819941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.240 [2024-12-09 23:15:46.819948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:34:06.500 23:15:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.500 23:15:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:06.500 23:15:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:06.500 23:15:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:06.500 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:06.500 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:06.500 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:06.500 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:06.500 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:06.760 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:06.760 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:06.760 23:15:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:18.997 [2024-12-09 23:15:59.218649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:18.997 [2024-12-09 23:15:59.220675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:18.997 [2024-12-09 23:15:59.220813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.997 [2024-12-09 23:15:59.220889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.997 [2024-12-09 23:15:59.220918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:18.997 [2024-12-09 23:15:59.220930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.997 [2024-12-09 23:15:59.220944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.997 [2024-12-09 23:15:59.220953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:18.997 [2024-12-09 23:15:59.220964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.997 [2024-12-09 23:15:59.220972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.997 [2024-12-09 23:15:59.220997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:18.997 [2024-12-09 23:15:59.221008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.997 [2024-12-09 23:15:59.221018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:18.997 23:15:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:18.997 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:19.259 [2024-12-09 23:15:59.718662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:19.259 [2024-12-09 23:15:59.720676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.259 [2024-12-09 23:15:59.720717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.259 [2024-12-09 23:15:59.720734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.259 [2024-12-09 23:15:59.720757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.259 [2024-12-09 23:15:59.720770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.259 [2024-12-09 23:15:59.720779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.259 [2024-12-09 23:15:59.720791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.259 [2024-12-09 23:15:59.720799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.259 [2024-12-09 23:15:59.720812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.259 [2024-12-09 23:15:59.720822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.259 [2024-12-09 23:15:59.720832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.259 [2024-12-09 23:15:59.720841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:19.259 23:15:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:19.259 23:15:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:19.259 23:15:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:19.259 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:19.521 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:19.521 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:19.521 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:19.521 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:19.521 23:15:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
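
The bdev_bdfs helper exercised throughout this trace (sw_hotplug.sh lines 12-13) asks the running SPDK target which PCI functions still back an NVMe bdev: it pulls the bdev list over RPC, extracts each namespace's pci_address with jq, and de-duplicates the result. A minimal reconstruction from the traced commands (the /dev/fd/63 in the xtrace is the process substitution carrying the RPC output into jq; rpc_cmd is the autotest wrapper around SPDK's rpc.py, as seen elsewhere in this log):

    bdev_bdfs() {
        # jq reads the bdev_get_bdevs JSON via process substitution,
        # which is why the xtrace above shows it reading /dev/fd/63
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) \
            | sort -u
    }

Callers collect the output into an array, bdfs=($(bdev_bdfs)), and either count it (the "(( N > 0 ))" checks at line 50) or compare it against the expected pair, which is what the "[[ 0000:00:10.0 0000:00:11.0 == ... ]]" test at line 71 verifies after each reattach.
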
00:34:19.521 23:16:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:19.521 23:16:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:19.521 23:16:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:31.750 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:31.750 23:16:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.62 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.62 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:34:31.751 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:34:31.751 23:16:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:34:31.751 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:31.751 23:16:12 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:38.394 [2024-12-09 23:16:18.168667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:38.394 [2024-12-09 23:16:18.170063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.170105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.170122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.170148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.170158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.170175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.170186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.170198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.170208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.170221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.170231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.170244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.568669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:38.394 [2024-12-09 23:16:18.570094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.570132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.570148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.570169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.570182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.570191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.570203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.570212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.570223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 [2024-12-09 23:16:18.570232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:38.394 [2024-12-09 23:16:18.570243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.394 [2024-12-09 23:16:18.570252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:38.394 23:16:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:38.394 23:16:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:50.609 23:16:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.609 23:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:50.609 23:16:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:50.609 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:50.609 [2024-12-09 23:16:31.068863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:50.609 [2024-12-09 23:16:31.070133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.609 [2024-12-09 23:16:31.070175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.609 [2024-12-09 23:16:31.070190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.610 [2024-12-09 23:16:31.070212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.610 [2024-12-09 23:16:31.070222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.610 [2024-12-09 23:16:31.070233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.610 [2024-12-09 23:16:31.070242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.610 [2024-12-09 23:16:31.070252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.610 [2024-12-09 23:16:31.070261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.610 [2024-12-09 23:16:31.070271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.610 [2024-12-09 23:16:31.070280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.610 [2024-12-09 23:16:31.070290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.610 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:50.610 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:50.610 23:16:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.610 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:50.610 23:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:50.610 23:16:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.610 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:50.610 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:50.869 [2024-12-09 23:16:31.468876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:50.869 [2024-12-09 23:16:31.470421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.869 [2024-12-09 23:16:31.470458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.869 [2024-12-09 23:16:31.470473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.869 [2024-12-09 23:16:31.470491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.869 [2024-12-09 23:16:31.470505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.869 [2024-12-09 23:16:31.470514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.869 [2024-12-09 23:16:31.470525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.869 [2024-12-09 23:16:31.470534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.869 [2024-12-09 23:16:31.470546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.869 [2024-12-09 23:16:31.470555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.869 [2024-12-09 23:16:31.470565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.869 [2024-12-09 23:16:31.470574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:51.129 23:16:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.129 23:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:51.129 23:16:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.129 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
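
Note that bash's xtrace never prints redirection targets, so the bare "echo 1", "echo uio_pci_generic", "echo 0000:00:10.0" and "echo ''" entries at sw_hotplug.sh lines 40 and 56-62 show only the values being written, not the sysfs files receiving them. The values are consistent with the kernel's standard remove/rescan/driver_override sequence; a sketch under that assumption (the paths below are inferred, not read from the log, and line 61's second echo of the BDF has a target that cannot be recovered from the trace):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"       # line 40: hot-remove the function (paths inferred)
    echo 1 > /sys/bus/pci/rescan                      # line 56: re-enumerate the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # line 59
    echo "$bdf" > /sys/bus/pci/drivers_probe          # line 60: probe with the override in place
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # line 62: clear the override

The driver_override/drivers_probe dance steers the rescanned device to uio_pci_generic instead of letting it rebind to the kernel nvme driver, which is why the controllers come back visible to SPDK's userspace driver rather than to the OS.
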
00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.390 23:16:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:03.612 23:16:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.612 23:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:03.612 23:16:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:03.612 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:03.612 [2024-12-09 23:16:43.969073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:35:03.612 [2024-12-09 23:16:43.970616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.612 [2024-12-09 23:16:43.970653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.612 [2024-12-09 23:16:43.970665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.612 [2024-12-09 23:16:43.970686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.612 [2024-12-09 23:16:43.970693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.612 [2024-12-09 23:16:43.970702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.612 [2024-12-09 23:16:43.970710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.612 [2024-12-09 23:16:43.970721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.612 [2024-12-09 23:16:43.970728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.612 [2024-12-09 23:16:43.970737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.612 [2024-12-09 23:16:43.970743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.612 [2024-12-09 23:16:43.970751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:03.613 23:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:03.613 23:16:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.613 23:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:03.613 23:16:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.613 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:35:03.613 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:03.871 [2024-12-09 23:16:44.469082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:35:03.871 [2024-12-09 23:16:44.472323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.871 [2024-12-09 23:16:44.472364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.871 [2024-12-09 23:16:44.472377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.871 [2024-12-09 23:16:44.472396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.871 [2024-12-09 23:16:44.472407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.871 [2024-12-09 23:16:44.472415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.871 [2024-12-09 23:16:44.472424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.871 [2024-12-09 23:16:44.472431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.871 [2024-12-09 23:16:44.472440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.871 [2024-12-09 23:16:44.472448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.871 [2024-12-09 23:16:44.472459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.871 [2024-12-09 23:16:44.472466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.139 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
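
The "(( 1 > 0 )) ... sleep 0.5 ... Still waiting for ... to be gone" pattern repeated above is the detach-side poll at sw_hotplug.sh lines 50-51: after writing the remove triggers, the test spins until bdev_bdfs reports no remaining BDFs. Roughly, as reconstructed from the order of the traced commands:

    # the arithmetic test in the trace is ${#bdfs[@]} after expansion:
    # (( 2 > 0 )), (( 1 > 0 )), then (( 0 > 0 )) ends the loop
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do   # line 50
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"        # line 51
    done

The printf takes the whole array, which is why a cycle where both controllers linger prints "Still waiting" for 0000:00:10.0 and 0000:00:11.0, while later cycles print only one line for 0000:00:11.0.
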
00:35:04.140 23:16:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.140 23:16:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:04.140 23:16:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:04.140 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:35:04.401 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:04.401 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:04.401 23:16:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:16.605 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:16.605 23:16:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.605 23:16:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.606 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:16.606 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.74 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.74 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:35:16.606 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.74 00:35:16.606 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.74 2 00:35:16.606 remove_attach_helper took 44.74s to complete (handling 2 nvme drive(s)) 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:35:16.606 23:16:56 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67293 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67293 ']' 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67293 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67293 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.606 23:16:56 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.606 killing process with pid 67293 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67293' 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67293 00:35:16.606 23:16:56 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67293 00:35:17.546 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:17.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:18.378 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:18.378 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:18.378 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:18.378 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:18.378 00:35:18.378 real 2m28.503s 00:35:18.378 user 1m50.598s 00:35:18.378 sys 0m16.518s 00:35:18.378 23:16:58 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.378 ************************************ 00:35:18.378 END TEST sw_hotplug 00:35:18.378 ************************************ 00:35:18.378 23:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:18.378 23:16:58 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:35:18.378 23:16:58 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:18.378 23:16:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:18.378 23:16:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.378 23:16:58 -- common/autotest_common.sh@10 -- # set +x 00:35:18.378 ************************************ 00:35:18.378 START TEST nvme_xnvme 00:35:18.378 ************************************ 00:35:18.378 23:16:58 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:18.638 * Looking for test storage... 
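
The teardown traced just above is autotest_common.sh's killprocess guarding the SPDK target's shutdown: verify the pid argument, confirm the process is alive, refuse to signal a sudo wrapper, then kill and reap it. A reconstruction of its shape from the @954-@978 entries (the branch taken when the process name is "sudo" is not exercised in this run, so it is omitted, and the exact return codes are assumptions):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                            # @954: require a pid
        kill -0 "$pid" || return 0                           # @958: already gone? nothing to do
        if [[ $(uname) == Linux ]]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 in this run
        fi
        [[ $process_name != sudo ]] || return 1              # @964: never signal a sudo wrapper directly
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid"                                          # @978: reap it before returning
    }

Here the target is pid 67293 running as reactor_0, so the plain SIGTERM path is taken, and the wait at @978 lets the 2m28.503s sw_hotplug timing above account for the reactor's clean exit.
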
00:35:18.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.638 23:16:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.638 --rc genhtml_branch_coverage=1 00:35:18.638 --rc genhtml_function_coverage=1 00:35:18.638 --rc genhtml_legend=1 00:35:18.638 --rc geninfo_all_blocks=1 00:35:18.638 --rc geninfo_unexecuted_blocks=1 00:35:18.638 00:35:18.638 ' 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.638 --rc genhtml_branch_coverage=1 00:35:18.638 --rc genhtml_function_coverage=1 00:35:18.638 --rc genhtml_legend=1 00:35:18.638 --rc geninfo_all_blocks=1 00:35:18.638 --rc geninfo_unexecuted_blocks=1 00:35:18.638 00:35:18.638 ' 00:35:18.638 23:16:59 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.638 --rc genhtml_branch_coverage=1 00:35:18.638 --rc genhtml_function_coverage=1 00:35:18.638 --rc genhtml_legend=1 00:35:18.638 --rc geninfo_all_blocks=1 00:35:18.638 --rc geninfo_unexecuted_blocks=1 00:35:18.638 00:35:18.638 ' 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.638 --rc genhtml_branch_coverage=1 00:35:18.638 --rc genhtml_function_coverage=1 00:35:18.638 --rc genhtml_legend=1 00:35:18.638 --rc geninfo_all_blocks=1 00:35:18.638 --rc geninfo_unexecuted_blocks=1 00:35:18.638 00:35:18.638 ' 00:35:18.638 23:16:59 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:35:18.638 23:16:59 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:18.638 23:16:59 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:35:18.638 23:16:59 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:35:18.639 23:16:59 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:35:18.639 23:16:59 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:35:18.639 #define SPDK_CONFIG_H 00:35:18.639 #define SPDK_CONFIG_AIO_FSDEV 1 00:35:18.639 #define SPDK_CONFIG_APPS 1 00:35:18.639 #define SPDK_CONFIG_ARCH native 00:35:18.639 #define SPDK_CONFIG_ASAN 1 00:35:18.639 #undef SPDK_CONFIG_AVAHI 00:35:18.639 #undef SPDK_CONFIG_CET 00:35:18.639 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:35:18.639 #define SPDK_CONFIG_COVERAGE 1 00:35:18.639 #define SPDK_CONFIG_CROSS_PREFIX 00:35:18.639 #undef SPDK_CONFIG_CRYPTO 00:35:18.639 #undef SPDK_CONFIG_CRYPTO_MLX5 00:35:18.639 #undef SPDK_CONFIG_CUSTOMOCF 00:35:18.639 #undef SPDK_CONFIG_DAOS 00:35:18.639 #define SPDK_CONFIG_DAOS_DIR 00:35:18.639 #define SPDK_CONFIG_DEBUG 1 00:35:18.639 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:35:18.639 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:18.639 #define SPDK_CONFIG_DPDK_INC_DIR 00:35:18.639 #define SPDK_CONFIG_DPDK_LIB_DIR 00:35:18.639 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:35:18.639 #undef SPDK_CONFIG_DPDK_UADK 00:35:18.639 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:18.639 #define SPDK_CONFIG_EXAMPLES 1 00:35:18.639 #undef SPDK_CONFIG_FC 00:35:18.639 #define SPDK_CONFIG_FC_PATH 00:35:18.639 #define SPDK_CONFIG_FIO_PLUGIN 1 00:35:18.639 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:35:18.639 #define SPDK_CONFIG_FSDEV 1 00:35:18.639 #undef SPDK_CONFIG_FUSE 00:35:18.639 #undef SPDK_CONFIG_FUZZER 00:35:18.639 #define SPDK_CONFIG_FUZZER_LIB 00:35:18.639 #undef SPDK_CONFIG_GOLANG 00:35:18.639 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:35:18.639 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:35:18.639 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:35:18.639 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:35:18.639 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:35:18.639 #undef SPDK_CONFIG_HAVE_LIBBSD 00:35:18.639 #undef SPDK_CONFIG_HAVE_LZ4 00:35:18.639 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:35:18.639 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:35:18.639 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:35:18.639 #define SPDK_CONFIG_IDXD 1 00:35:18.639 #define SPDK_CONFIG_IDXD_KERNEL 1 00:35:18.639 #undef SPDK_CONFIG_IPSEC_MB 00:35:18.639 #define SPDK_CONFIG_IPSEC_MB_DIR 00:35:18.639 #define SPDK_CONFIG_ISAL 1 00:35:18.639 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:35:18.639 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:35:18.639 #define SPDK_CONFIG_LIBDIR 00:35:18.639 #undef SPDK_CONFIG_LTO 00:35:18.639 #define SPDK_CONFIG_MAX_LCORES 128 00:35:18.639 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:35:18.639 #define SPDK_CONFIG_NVME_CUSE 1 00:35:18.639 #undef SPDK_CONFIG_OCF 00:35:18.639 #define SPDK_CONFIG_OCF_PATH 00:35:18.639 #define SPDK_CONFIG_OPENSSL_PATH 00:35:18.639 #undef SPDK_CONFIG_PGO_CAPTURE 00:35:18.639 
#define SPDK_CONFIG_PGO_DIR 00:35:18.639 #undef SPDK_CONFIG_PGO_USE 00:35:18.639 #define SPDK_CONFIG_PREFIX /usr/local 00:35:18.639 #undef SPDK_CONFIG_RAID5F 00:35:18.639 #undef SPDK_CONFIG_RBD 00:35:18.639 #define SPDK_CONFIG_RDMA 1 00:35:18.639 #define SPDK_CONFIG_RDMA_PROV verbs 00:35:18.639 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:35:18.639 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:35:18.639 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:35:18.639 #define SPDK_CONFIG_SHARED 1 00:35:18.639 #undef SPDK_CONFIG_SMA 00:35:18.639 #define SPDK_CONFIG_TESTS 1 00:35:18.639 #undef SPDK_CONFIG_TSAN 00:35:18.639 #define SPDK_CONFIG_UBLK 1 00:35:18.639 #define SPDK_CONFIG_UBSAN 1 00:35:18.639 #undef SPDK_CONFIG_UNIT_TESTS 00:35:18.639 #undef SPDK_CONFIG_URING 00:35:18.639 #define SPDK_CONFIG_URING_PATH 00:35:18.639 #undef SPDK_CONFIG_URING_ZNS 00:35:18.639 #undef SPDK_CONFIG_USDT 00:35:18.639 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:35:18.639 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:35:18.639 #undef SPDK_CONFIG_VFIO_USER 00:35:18.639 #define SPDK_CONFIG_VFIO_USER_DIR 00:35:18.639 #define SPDK_CONFIG_VHOST 1 00:35:18.639 #define SPDK_CONFIG_VIRTIO 1 00:35:18.639 #undef SPDK_CONFIG_VTUNE 00:35:18.639 #define SPDK_CONFIG_VTUNE_DIR 00:35:18.639 #define SPDK_CONFIG_WERROR 1 00:35:18.639 #define SPDK_CONFIG_WPDK_DIR 00:35:18.639 #define SPDK_CONFIG_XNVME 1 00:35:18.639 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:35:18.639 23:16:59 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:35:18.639 23:16:59 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:18.639 23:16:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.639 23:16:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.639 23:16:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.639 23:16:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.639 23:16:59 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.639 23:16:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.639 23:16:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.639 23:16:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:18.639 23:16:59 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.639 23:16:59 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@68 -- # uname -s 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:35:18.639 23:16:59 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:35:18.640 23:16:59 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:35:18.640 23:16:59 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:18.640 
23:16:59 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:18.640 23:16:59 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:18.641 
23:16:59 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
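The sanitizer setup just traced writes a leak suppression and points LeakSanitizer at it, so the known libfuse3 leak does not fail an ASAN-instrumented run. Condensed into a stand-alone sketch (file path and pattern exactly as in this run):

    # Sketch of the suppression flow from autotest_common.sh.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' > "$asan_suppression_file"          # pattern seen above
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"   # read at process exit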
00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68634 ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68634 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Wcxa1M 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Wcxa1M/tests/xnvme /tmp/spdk.Wcxa1M 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974355968 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593530368 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:35:18.641 
23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974355968 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593530368 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:35:18.641 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:35:18.642 23:16:59 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_4/fedora39-libvirt/output 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95005143040 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4697636864 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:35:18.642 * Looking for test storage... 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974355968 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:35:18.642 23:16:59 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:35:18.642 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:18.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.901 --rc genhtml_branch_coverage=1 00:35:18.901 --rc genhtml_function_coverage=1 00:35:18.901 --rc genhtml_legend=1 00:35:18.901 --rc geninfo_all_blocks=1 00:35:18.901 --rc geninfo_unexecuted_blocks=1 00:35:18.901 00:35:18.901 ' 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:18.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.901 --rc genhtml_branch_coverage=1 00:35:18.901 --rc genhtml_function_coverage=1 00:35:18.901 --rc genhtml_legend=1 00:35:18.901 --rc geninfo_all_blocks=1 00:35:18.901 --rc geninfo_unexecuted_blocks=1 00:35:18.901 00:35:18.901 ' 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:18.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.901 --rc genhtml_branch_coverage=1 00:35:18.901 --rc genhtml_function_coverage=1 00:35:18.901 --rc genhtml_legend=1 00:35:18.901 --rc geninfo_all_blocks=1 00:35:18.901 --rc geninfo_unexecuted_blocks=1 00:35:18.901 00:35:18.901 ' 00:35:18.901 23:16:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:18.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.901 --rc genhtml_branch_coverage=1 00:35:18.901 --rc genhtml_function_coverage=1 00:35:18.901 --rc genhtml_legend=1 00:35:18.901 --rc geninfo_all_blocks=1 00:35:18.901 --rc geninfo_unexecuted_blocks=1 00:35:18.901 00:35:18.901 ' 00:35:18.901 23:16:59 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.901 23:16:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.901 23:16:59 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.901 23:16:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.901 23:16:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.901 23:16:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:18.901 23:16:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:35:18.901 
23:16:59 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:35:18.901 23:16:59 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:19.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:19.160 Waiting for block devices as requested 00:35:19.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:19.418 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:19.418 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:19.419 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:24.682 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:24.682 23:17:05 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:35:24.940 23:17:05 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:35:24.940 23:17:05 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:35:25.198 23:17:05 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:35:25.198 No valid GPT data, bailing 00:35:25.198 23:17:05 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:35:25.198 23:17:05 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:35:25.198 23:17:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:35:25.198 23:17:05 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:25.198 23:17:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.198 23:17:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:25.198 ************************************ 00:35:25.198 START TEST xnvme_rpc 00:35:25.198 ************************************ 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69020 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69020 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69020 ']' 00:35:25.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:25.198 23:17:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:25.456 [2024-12-09 23:17:05.858368] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
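The xnvme_rpc test unfolding here starts spdk_tgt, creates an xnvme bdev over JSON-RPC, reads the config back to verify each parameter, then deletes the bdev. A condensed sketch using the stand-alone rpc.py client, the equivalent of the rpc_cmd helper seen in the trace (socket defaults to the /var/tmp/spdk.sock exported earlier):

    # Sketch of the RPC sequence exercised by xnvme_rpc (values from this run).
    ./build/bin/spdk_tgt &                                   # target under test
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    ./scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev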
00:35:25.456 [2024-12-09 23:17:05.858498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69020 ] 00:35:25.456 [2024-12-09 23:17:06.018533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.714 [2024-12-09 23:17:06.120441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.279 xnvme_bdev 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:35:26.279 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:35:26.280 23:17:06 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69020 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69020 ']' 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69020 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69020 00:35:26.280 killing process with pid 69020 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69020' 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69020 00:35:26.280 23:17:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69020 00:35:28.187 00:35:28.187 real 0m2.860s 00:35:28.187 user 0m2.960s 00:35:28.187 sys 0m0.374s 00:35:28.187 23:17:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.187 ************************************ 00:35:28.187 END TEST xnvme_rpc 00:35:28.187 ************************************ 00:35:28.187 23:17:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:28.187 23:17:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:28.187 23:17:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:28.187 23:17:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.187 23:17:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:28.187 ************************************ 00:35:28.187 START TEST xnvme_bdevperf 00:35:28.187 ************************************ 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:28.187 23:17:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.187 { 00:35:28.187 "subsystems": [ 00:35:28.187 { 00:35:28.187 "subsystem": "bdev", 00:35:28.187 "config": [ 00:35:28.187 { 00:35:28.187 "params": { 00:35:28.187 "io_mechanism": "libaio", 00:35:28.187 "conserve_cpu": false, 00:35:28.187 "filename": "/dev/nvme0n1", 00:35:28.187 "name": "xnvme_bdev" 00:35:28.187 }, 00:35:28.187 "method": "bdev_xnvme_create" 00:35:28.187 }, 00:35:28.187 { 00:35:28.187 "method": "bdev_wait_for_examine" 00:35:28.187 } 00:35:28.187 ] 00:35:28.187 } 00:35:28.187 ] 00:35:28.187 } 00:35:28.187 [2024-12-09 23:17:08.791811] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:35:28.187 [2024-12-09 23:17:08.791965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69094 ] 00:35:28.446 [2024-12-09 23:17:08.956474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.446 [2024-12-09 23:17:09.059100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.702 Running I/O for 5 seconds... 00:35:31.010 35041.00 IOPS, 136.88 MiB/s [2024-12-09T23:17:12.580Z] 35527.50 IOPS, 138.78 MiB/s [2024-12-09T23:17:13.513Z] 36641.00 IOPS, 143.13 MiB/s [2024-12-09T23:17:14.445Z] 37117.50 IOPS, 144.99 MiB/s 00:35:33.809 Latency(us) 00:35:33.809 [2024-12-09T23:17:14.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.809 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:35:33.809 xnvme_bdev : 5.00 36273.20 141.69 0.00 0.00 1760.01 258.36 7561.85 00:35:33.809 [2024-12-09T23:17:14.445Z] =================================================================================================================== 00:35:33.809 [2024-12-09T23:17:14.445Z] Total : 36273.20 141.69 0.00 0.00 1760.01 258.36 7561.85 00:35:34.742 23:17:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:34.742 23:17:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:35:34.742 23:17:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:35:34.742 23:17:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:34.742 23:17:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.742 { 00:35:34.742 "subsystems": [ 00:35:34.742 { 00:35:34.742 "subsystem": "bdev", 00:35:34.742 "config": [ 00:35:34.742 { 00:35:34.742 "params": { 00:35:34.742 "io_mechanism": "libaio", 00:35:34.742 "conserve_cpu": false, 00:35:34.742 "filename": "/dev/nvme0n1", 00:35:34.742 "name": "xnvme_bdev" 00:35:34.742 }, 00:35:34.742 "method": "bdev_xnvme_create" 00:35:34.742 }, 00:35:34.742 { 00:35:34.742 "method": "bdev_wait_for_examine" 00:35:34.742 } 00:35:34.742 ] 00:35:34.742 } 00:35:34.742 ] 00:35:34.742 } 00:35:34.742 [2024-12-09 23:17:15.127074] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
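The JSON block traced above is what gen_conf writes to /dev/fd/62 for bdevperf. A minimal sketch of replaying that randread run by hand from the SPDK repo root, assuming a stock build tree; the /tmp path is illustrative, and every flag and config field is copied from the trace:

    # Save the generated bdev config, then point bdevperf at it.
    cat > /tmp/xnvme_libaio.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
       "method": "bdev_xnvme_create"},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    # Same knobs as the run above: QD 64, 4 KiB random reads, 5 s, target bdev xnvme_bdev.
    ./build/examples/bdevperf --json /tmp/xnvme_libaio.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096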
00:35:34.742 [2024-12-09 23:17:15.127184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69169 ] 00:35:34.742 [2024-12-09 23:17:15.285500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.000 [2024-12-09 23:17:15.386959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.258 Running I/O for 5 seconds... 00:35:37.125 34693.00 IOPS, 135.52 MiB/s [2024-12-09T23:17:18.697Z] 35131.00 IOPS, 137.23 MiB/s [2024-12-09T23:17:19.697Z] 35127.33 IOPS, 137.22 MiB/s [2024-12-09T23:17:21.071Z] 34725.50 IOPS, 135.65 MiB/s 00:35:40.435 Latency(us) 00:35:40.435 [2024-12-09T23:17:21.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.435 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:35:40.435 xnvme_bdev : 5.00 34595.88 135.14 0.00 0.00 1845.15 206.38 8065.97 00:35:40.435 [2024-12-09T23:17:21.071Z] =================================================================================================================== 00:35:40.435 [2024-12-09T23:17:21.071Z] Total : 34595.88 135.14 0.00 0.00 1845.15 206.38 8065.97 00:35:41.001 00:35:41.001 real 0m12.672s 00:35:41.001 user 0m4.775s 00:35:41.002 sys 0m5.764s 00:35:41.002 ************************************ 00:35:41.002 23:17:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.002 23:17:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.002 END TEST xnvme_bdevperf 00:35:41.002 ************************************ 00:35:41.002 23:17:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:35:41.002 23:17:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:41.002 23:17:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:41.002 23:17:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:41.002 ************************************ 00:35:41.002 START TEST xnvme_fio_plugin 00:35:41.002 ************************************ 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:41.002 23:17:21 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:41.002 23:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:41.002 { 00:35:41.002 "subsystems": [ 00:35:41.002 { 00:35:41.002 "subsystem": "bdev", 00:35:41.002 "config": [ 00:35:41.002 { 00:35:41.002 "params": { 00:35:41.002 "io_mechanism": "libaio", 00:35:41.002 "conserve_cpu": false, 00:35:41.002 "filename": "/dev/nvme0n1", 00:35:41.002 "name": "xnvme_bdev" 00:35:41.002 }, 00:35:41.002 "method": "bdev_xnvme_create" 00:35:41.002 }, 00:35:41.002 { 00:35:41.002 "method": "bdev_wait_for_examine" 00:35:41.002 } 00:35:41.002 ] 00:35:41.002 } 00:35:41.002 ] 00:35:41.002 } 00:35:41.002 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:35:41.002 fio-3.35 00:35:41.002 Starting 1 thread 00:35:47.585 00:35:47.585 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69283: Mon Dec 9 23:17:27 2024 00:35:47.585 read: IOPS=32.1k, BW=126MiB/s (132MB/s)(628MiB/5001msec) 00:35:47.585 slat (usec): min=3, max=2150, avg=21.07, stdev=94.63 00:35:47.585 clat (usec): min=97, max=8682, avg=1409.98, stdev=553.25 00:35:47.585 lat (usec): min=170, max=8687, avg=1431.05, stdev=544.16 00:35:47.585 clat percentiles (usec): 00:35:47.585 | 1.00th=[ 273], 5.00th=[ 537], 10.00th=[ 701], 20.00th=[ 947], 00:35:47.585 | 30.00th=[ 1123], 40.00th=[ 1287], 50.00th=[ 1418], 60.00th=[ 1549], 00:35:47.585 | 70.00th=[ 1663], 80.00th=[ 1827], 90.00th=[ 2040], 95.00th=[ 2245], 00:35:47.585 | 99.00th=[ 3032], 99.50th=[ 3359], 99.90th=[ 4178], 99.95th=[ 4359], 00:35:47.585 | 99.99th=[ 6194] 00:35:47.585 bw ( KiB/s): min=118208, max=147072, per=100.00%, avg=128932.44, stdev=8637.83, 
samples=9 00:35:47.585 iops : min=29552, max=36768, avg=32233.11, stdev=2159.46, samples=9 00:35:47.585 lat (usec) : 100=0.01%, 250=0.69%, 500=3.67%, 750=7.31%, 1000=11.04% 00:35:47.585 lat (msec) : 2=65.73%, 4=11.41%, 10=0.15% 00:35:47.585 cpu : usr=44.78%, sys=46.36%, ctx=16, majf=0, minf=764 00:35:47.585 IO depths : 1=0.4%, 2=1.3%, 4=3.4%, 8=8.7%, 16=23.1%, 32=61.0%, >=64=2.1% 00:35:47.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.585 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:35:47.585 issued rwts: total=160759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:35:47.585 00:35:47.585 Run status group 0 (all jobs): 00:35:47.585 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=628MiB (658MB), run=5001-5001msec 00:35:47.847 ----------------------------------------------------- 00:35:47.847 Suppressions used: 00:35:47.847 count bytes template 00:35:47.847 1 11 /usr/src/fio/parse.c 00:35:47.847 1 8 libtcmalloc_minimal.so 00:35:47.847 1 904 libcrypto.so 00:35:47.847 ----------------------------------------------------- 00:35:47.847 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:47.847 23:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:47.847 { 00:35:47.847 "subsystems": [ 00:35:47.847 { 00:35:47.847 "subsystem": "bdev", 00:35:47.847 "config": [ 00:35:47.847 { 00:35:47.847 "params": { 00:35:47.847 "io_mechanism": "libaio", 00:35:47.847 "conserve_cpu": false, 00:35:47.847 "filename": "/dev/nvme0n1", 00:35:47.847 "name": "xnvme_bdev" 00:35:47.847 }, 00:35:47.847 "method": "bdev_xnvme_create" 00:35:47.847 }, 00:35:47.847 { 00:35:47.847 "method": "bdev_wait_for_examine" 00:35:47.847 } 00:35:47.847 ] 00:35:47.847 } 00:35:47.847 ] 00:35:47.847 } 00:35:48.109 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:35:48.109 fio-3.35 00:35:48.109 Starting 1 thread 00:35:54.694 00:35:54.694 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69379: Mon Dec 9 23:17:34 2024 00:35:54.694 write: IOPS=5578, BW=21.8MiB/s (22.8MB/s)(112MiB/5161msec); 0 zone resets 00:35:54.694 slat (usec): min=4, max=1953, avg=18.67, stdev=61.89 00:35:54.694 clat (usec): min=52, max=626834, avg=11059.43, stdev=42660.74 00:35:54.694 lat (usec): min=62, max=626859, avg=11078.10, stdev=42659.69 00:35:54.694 clat percentiles (usec): 00:35:54.694 | 1.00th=[ 119], 5.00th=[ 351], 10.00th=[ 510], 20.00th=[ 775], 00:35:54.694 | 30.00th=[ 955], 40.00th=[ 1106], 50.00th=[ 1303], 60.00th=[ 1549], 00:35:54.694 | 70.00th=[ 1876], 80.00th=[ 20317], 90.00th=[ 24773], 95.00th=[ 26870], 00:35:54.694 | 99.00th=[202376], 99.50th=[274727], 99.90th=[624952], 99.95th=[624952], 00:35:54.694 | 99.99th=[624952] 00:35:54.694 bw ( KiB/s): min=14664, max=39920, per=100.00%, avg=22980.00, stdev=11372.62, samples=10 00:35:54.694 iops : min= 3666, max= 9980, avg=5745.00, stdev=2843.16, samples=10 00:35:54.694 lat (usec) : 100=0.45%, 250=2.46%, 500=6.74%, 750=9.38%, 1000=14.34% 00:35:54.694 lat (msec) : 2=39.54%, 4=6.26%, 10=0.26%, 20=0.38%, 50=17.74% 00:35:54.694 lat (msec) : 100=0.24%, 250=1.57%, 500=0.44%, 750=0.22% 00:35:54.694 cpu : usr=89.75%, sys=6.59%, ctx=11, majf=0, minf=765 00:35:54.694 IO depths : 1=0.3%, 2=0.7%, 4=2.2%, 8=6.5%, 16=17.2%, 32=56.4%, >=64=16.7% 00:35:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 complete : 0=0.0%, 4=98.4%, 8=0.3%, 16=0.1%, 32=0.2%, 64=1.0%, >=64=0.0% 00:35:54.694 issued rwts: total=0,28789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:35:54.694 00:35:54.694 Run status group 0 (all jobs): 00:35:54.694 WRITE: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=112MiB (118MB), run=5161-5161msec 00:35:54.955 ----------------------------------------------------- 00:35:54.955 Suppressions used: 00:35:54.955 count bytes template 00:35:54.955 1 11 /usr/src/fio/parse.c 00:35:54.955 1 8 libtcmalloc_minimal.so 00:35:54.955 1 904 libcrypto.so 00:35:54.955 ----------------------------------------------------- 00:35:54.955 00:35:54.955 00:35:54.955 real 
0m13.955s 00:35:54.955 user 0m9.703s 00:35:54.955 sys 0m3.212s 00:35:54.955 23:17:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.955 23:17:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:35:54.955 ************************************ 00:35:54.955 END TEST xnvme_fio_plugin 00:35:54.955 ************************************ 00:35:54.955 23:17:35 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:35:54.955 23:17:35 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:35:54.955 23:17:35 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:35:54.955 23:17:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:35:54.955 23:17:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:54.955 23:17:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:54.955 23:17:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:54.955 ************************************ 00:35:54.955 START TEST xnvme_rpc 00:35:54.955 ************************************ 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69461 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69461 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69461 ']' 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.955 23:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:54.955 [2024-12-09 23:17:35.566630] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
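The xnvme_rpc round starting here repeats the earlier RPC round-trip with conserve_cpu enabled (the -c at xnvme/xnvme.sh@56). Condensed, and assuming scripts/rpc.py accepts the same arguments the harness's rpc_cmd wrapper forwards to it, the round-trip is roughly:

    ./build/bin/spdk_tgt &                               # target under test (pid 69461 in this run)
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c enables conserve_cpu
    # Read the config back and check one creation parameter, as xnvme/common.sh@65-66 does:
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1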
00:35:54.955 [2024-12-09 23:17:35.566784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69461 ] 00:35:55.216 [2024-12-09 23:17:35.728824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.476 [2024-12-09 23:17:35.858802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.055 xnvme_bdev 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.055 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69461 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69461 ']' 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69461 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69461 00:35:56.318 killing process with pid 69461 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69461' 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69461 00:35:56.318 23:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69461 00:35:58.229 ************************************ 00:35:58.229 END TEST xnvme_rpc 00:35:58.229 ************************************ 00:35:58.229 00:35:58.229 real 0m2.868s 00:35:58.229 user 0m2.850s 00:35:58.229 sys 0m0.476s 00:35:58.229 23:17:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.229 23:17:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:58.229 23:17:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:58.229 23:17:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:58.229 23:17:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:58.229 23:17:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:58.229 ************************************ 00:35:58.229 START TEST xnvme_bdevperf 00:35:58.229 ************************************ 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:58.229 23:17:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:58.229 { 00:35:58.229 "subsystems": [ 00:35:58.229 { 00:35:58.229 "subsystem": "bdev", 00:35:58.229 "config": [ 00:35:58.229 { 00:35:58.229 "params": { 00:35:58.229 "io_mechanism": "libaio", 00:35:58.229 "conserve_cpu": true, 00:35:58.229 "filename": "/dev/nvme0n1", 00:35:58.229 "name": "xnvme_bdev" 00:35:58.229 }, 00:35:58.229 "method": "bdev_xnvme_create" 00:35:58.229 }, 00:35:58.229 { 00:35:58.229 "method": "bdev_wait_for_examine" 00:35:58.229 } 00:35:58.229 ] 00:35:58.229 } 00:35:58.229 ] 00:35:58.229 } 00:35:58.229 [2024-12-09 23:17:38.490575] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:35:58.229 [2024-12-09 23:17:38.490729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69538 ] 00:35:58.229 [2024-12-09 23:17:38.657007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.229 [2024-12-09 23:17:38.791249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.489 Running I/O for 5 seconds... 00:36:00.819 14453.00 IOPS, 56.46 MiB/s [2024-12-09T23:17:42.495Z] 12487.50 IOPS, 48.78 MiB/s [2024-12-09T23:17:43.452Z] 13087.33 IOPS, 51.12 MiB/s [2024-12-09T23:17:44.421Z] 15545.25 IOPS, 60.72 MiB/s [2024-12-09T23:17:44.421Z] 18558.40 IOPS, 72.49 MiB/s 00:36:03.785 Latency(us) 00:36:03.785 [2024-12-09T23:17:44.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.785 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:03.785 xnvme_bdev : 5.01 18561.28 72.50 0.00 0.00 3443.57 55.53 179871.11 00:36:03.785 [2024-12-09T23:17:44.421Z] =================================================================================================================== 00:36:03.785 [2024-12-09T23:17:44.421Z] Total : 18561.28 72.50 0.00 0.00 3443.57 55.53 179871.11 00:36:04.357 23:17:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:04.357 23:17:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:04.357 23:17:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:04.357 23:17:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:04.357 23:17:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.357 { 00:36:04.357 "subsystems": [ 00:36:04.357 { 00:36:04.357 "subsystem": "bdev", 00:36:04.357 "config": [ 00:36:04.357 { 00:36:04.357 "params": { 00:36:04.357 "io_mechanism": "libaio", 00:36:04.357 "conserve_cpu": true, 00:36:04.357 "filename": "/dev/nvme0n1", 00:36:04.357 "name": "xnvme_bdev" 00:36:04.357 }, 00:36:04.357 "method": "bdev_xnvme_create" 00:36:04.357 }, 00:36:04.357 { 00:36:04.357 "method": "bdev_wait_for_examine" 00:36:04.357 } 00:36:04.357 ] 00:36:04.357 } 00:36:04.357 ] 00:36:04.357 } 00:36:04.357 [2024-12-09 23:17:44.980800] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
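The IOPS and MiB/s columns in these latency tables are consistent with the 4096-byte I/O size (-o 4096), since MiB/s = IOPS x 4096 / 2^20. Checking the conserve_cpu=true randread total above (18561.28 IOPS, reported as 72.50 MiB/s):

    # 18561.28 IOPS at 4 KiB each comes to exactly 72.5050 MiB/s, matching the table.
    awk 'BEGIN { printf "%.4f MiB/s\n", 18561.28 * 4096 / 1048576 }'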
00:36:04.357 [2024-12-09 23:17:44.980911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69613 ] 00:36:04.617 [2024-12-09 23:17:45.139743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.617 [2024-12-09 23:17:45.235343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.878 Running I/O for 5 seconds... 00:36:07.207 3300.00 IOPS, 12.89 MiB/s [2024-12-09T23:17:48.787Z] 3297.50 IOPS, 12.88 MiB/s [2024-12-09T23:17:49.731Z] 3295.00 IOPS, 12.87 MiB/s [2024-12-09T23:17:50.674Z] 3820.75 IOPS, 14.92 MiB/s [2024-12-09T23:17:50.674Z] 3632.60 IOPS, 14.19 MiB/s 00:36:10.038 Latency(us) 00:36:10.038 [2024-12-09T23:17:50.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:10.038 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:36:10.038 xnvme_bdev : 5.02 3631.51 14.19 0.00 0.00 17580.69 51.99 40934.79 00:36:10.038 [2024-12-09T23:17:50.674Z] =================================================================================================================== 00:36:10.038 [2024-12-09T23:17:50.674Z] Total : 3631.51 14.19 0.00 0.00 17580.69 51.99 40934.79 00:36:10.983 00:36:10.983 real 0m12.876s 00:36:10.983 user 0m10.039s 00:36:10.983 sys 0m2.088s 00:36:10.983 ************************************ 00:36:10.983 END TEST xnvme_bdevperf 00:36:10.983 ************************************ 00:36:10.983 23:17:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.983 23:17:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.983 23:17:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:36:10.983 23:17:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:10.983 23:17:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.983 23:17:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:10.983 ************************************ 00:36:10.983 START TEST xnvme_fio_plugin 00:36:10.983 ************************************ 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:10.983 23:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:10.983 { 00:36:10.983 "subsystems": [ 00:36:10.983 { 00:36:10.983 "subsystem": "bdev", 00:36:10.983 "config": [ 00:36:10.983 { 00:36:10.983 "params": { 00:36:10.983 "io_mechanism": "libaio", 00:36:10.983 "conserve_cpu": true, 00:36:10.983 "filename": "/dev/nvme0n1", 00:36:10.983 "name": "xnvme_bdev" 00:36:10.983 }, 00:36:10.983 "method": "bdev_xnvme_create" 00:36:10.983 }, 00:36:10.983 { 00:36:10.983 "method": "bdev_wait_for_examine" 00:36:10.983 } 00:36:10.983 ] 00:36:10.983 } 00:36:10.983 ] 00:36:10.983 } 00:36:10.983 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:10.983 fio-3.35 00:36:10.983 Starting 1 thread 00:36:17.597 00:36:17.597 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69731: Mon Dec 9 23:17:57 2024 00:36:17.597 read: IOPS=43.6k, BW=170MiB/s (179MB/s)(852MiB/5001msec) 00:36:17.597 slat (usec): min=3, max=724, avg=19.62, stdev=26.75 00:36:17.597 clat (usec): min=6, max=5946, avg=887.66, stdev=549.90 00:36:17.597 lat (usec): min=33, max=5950, avg=907.29, stdev=553.54 00:36:17.597 clat percentiles (usec): 00:36:17.597 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 322], 20.00th=[ 457], 00:36:17.597 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 898], 00:36:17.597 | 70.00th=[ 1029], 80.00th=[ 1221], 90.00th=[ 1532], 95.00th=[ 1926], 00:36:17.597 | 99.00th=[ 2933], 99.50th=[ 3261], 99.90th=[ 3851], 99.95th=[ 4080], 00:36:17.597 | 99.99th=[ 4555] 00:36:17.597 bw ( KiB/s): min=162288, max=187728, per=98.95%, 
avg=172706.00, stdev=9498.67, samples=9 00:36:17.597 iops : min=40572, max=46932, avg=43176.44, stdev=2374.71, samples=9 00:36:17.597 lat (usec) : 10=0.01%, 50=0.01%, 100=0.03%, 250=5.38%, 500=18.28% 00:36:17.597 lat (usec) : 750=23.44%, 1000=21.06% 00:36:17.597 lat (msec) : 2=27.31%, 4=4.42%, 10=0.06% 00:36:17.597 cpu : usr=27.54%, sys=52.60%, ctx=71, majf=0, minf=764 00:36:17.597 IO depths : 1=0.1%, 2=1.3%, 4=3.8%, 8=9.8%, 16=24.9%, 32=58.2%, >=64=1.9% 00:36:17.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.597 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:36:17.597 issued rwts: total=218209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:17.597 00:36:17.597 Run status group 0 (all jobs): 00:36:17.597 READ: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=852MiB (894MB), run=5001-5001msec 00:36:17.597 ----------------------------------------------------- 00:36:17.598 Suppressions used: 00:36:17.598 count bytes template 00:36:17.598 1 11 /usr/src/fio/parse.c 00:36:17.598 1 8 libtcmalloc_minimal.so 00:36:17.598 1 904 libcrypto.so 00:36:17.598 ----------------------------------------------------- 00:36:17.598 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.598 23:17:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:17.598 23:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:36:17.598 23:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:17.598 23:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:17.598 23:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:17.598 23:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:17.598 { 00:36:17.598 "subsystems": [ 00:36:17.598 { 00:36:17.598 "subsystem": "bdev", 00:36:17.598 "config": [ 00:36:17.598 { 00:36:17.598 "params": { 00:36:17.598 "io_mechanism": "libaio", 00:36:17.598 "conserve_cpu": true, 00:36:17.598 "filename": "/dev/nvme0n1", 00:36:17.598 "name": "xnvme_bdev" 00:36:17.598 }, 00:36:17.598 "method": "bdev_xnvme_create" 00:36:17.598 }, 00:36:17.598 { 00:36:17.598 "method": "bdev_wait_for_examine" 00:36:17.598 } 00:36:17.598 ] 00:36:17.598 } 00:36:17.598 ] 00:36:17.598 } 00:36:17.598 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:17.598 fio-3.35 00:36:17.598 Starting 1 thread 00:36:24.182 00:36:24.182 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69824: Mon Dec 9 23:18:03 2024 00:36:24.182 write: IOPS=16.1k, BW=63.0MiB/s (66.1MB/s)(316MiB/5011msec); 0 zone resets 00:36:24.182 slat (usec): min=4, max=1217, avg=17.21, stdev=32.57 00:36:24.182 clat (usec): min=11, max=656598, avg=3459.76, stdev=25830.14 00:36:24.182 lat (usec): min=64, max=656610, avg=3476.97, stdev=25829.54 00:36:24.182 clat percentiles (usec): 00:36:24.182 | 1.00th=[ 172], 5.00th=[ 255], 10.00th=[ 334], 20.00th=[ 474], 00:36:24.182 | 30.00th=[ 594], 40.00th=[ 693], 50.00th=[ 799], 60.00th=[ 906], 00:36:24.182 | 70.00th=[ 1012], 80.00th=[ 1139], 90.00th=[ 1401], 95.00th=[ 1844], 00:36:24.182 | 99.00th=[125305], 99.50th=[158335], 99.90th=[291505], 99.95th=[591397], 00:36:24.182 | 99.99th=[658506] 00:36:24.182 bw ( KiB/s): min= 176, max=163944, per=100.00%, avg=64616.00, stdev=41446.36, samples=10 00:36:24.182 iops : min= 44, max=40986, avg=16154.00, stdev=10361.59, samples=10 00:36:24.182 lat (usec) : 20=0.01%, 50=0.01%, 100=0.04%, 250=4.67%, 500=17.40% 00:36:24.182 lat (usec) : 750=23.05%, 1000=24.10% 00:36:24.182 lat (msec) : 2=26.43%, 4=2.83%, 10=0.02%, 100=0.30%, 250=0.84% 00:36:24.182 lat (msec) : 500=0.26%, 750=0.07% 00:36:24.182 cpu : usr=76.27%, sys=17.50%, ctx=35, majf=0, minf=765 00:36:24.182 IO depths : 1=0.3%, 2=1.4%, 4=4.2%, 8=10.5%, 16=24.3%, 32=57.2%, >=64=2.1% 00:36:24.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.182 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:36:24.182 issued rwts: total=0,80834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:24.182 00:36:24.182 Run status group 0 (all jobs): 00:36:24.182 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=316MiB (331MB), run=5011-5011msec 00:36:24.182 ----------------------------------------------------- 00:36:24.182 Suppressions used: 00:36:24.182 count bytes template 00:36:24.182 1 11 /usr/src/fio/parse.c 00:36:24.182 1 8 libtcmalloc_minimal.so 00:36:24.182 1 904 libcrypto.so 
00:36:24.182 ----------------------------------------------------- 00:36:24.182 00:36:24.182 00:36:24.182 real 0m13.360s 00:36:24.182 user 0m7.687s 00:36:24.182 sys 0m3.973s 00:36:24.182 23:18:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.182 ************************************ 00:36:24.182 END TEST xnvme_fio_plugin 00:36:24.182 ************************************ 00:36:24.182 23:18:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:36:24.182 23:18:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:24.182 23:18:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.182 23:18:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.182 23:18:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:24.182 ************************************ 00:36:24.182 START TEST xnvme_rpc 00:36:24.182 ************************************ 00:36:24.182 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:24.182 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:24.182 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:24.182 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:24.182 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69905 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69905 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69905 ']' 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.183 23:18:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 [2024-12-09 23:18:04.812034] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
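With both libaio cells done, the trace above shows the driver advancing to io_uring and resetting conserve_cpu to false. Reconstructed from the xnvme/xnvme.sh line numbers visible in the xtrace (@75 through @88), and with the array declarations assumed rather than shown, the outer test-matrix loop looks roughly like:

    for io in "${xnvme_io[@]}"; do                           # libaio, io_uring, ...  (@75)
      method_bdev_xnvme_create_0["io_mechanism"]=$io         # (@76)
      method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1    # (@77)
      filename=/dev/nvme0n1                                  # (@79)
      name=xnvme_bdev                                        # (@80)
      for cc in "${xnvme_conserve_cpu[@]}"; do               # false, then true       (@82)
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc       # (@83)
        conserve_cpu=$cc                                     # (@84)
        run_test xnvme_rpc xnvme_rpc                         # (@86)
        run_test xnvme_bdevperf xnvme_bdevperf               # (@87)
        run_test xnvme_fio_plugin xnvme_fio_plugin           # (@88)
      done
    done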
00:36:24.183 [2024-12-09 23:18:04.812147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69905 ] 00:36:24.442 [2024-12-09 23:18:04.967153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.443 [2024-12-09 23:18:05.061076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.010 xnvme_bdev 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.010 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69905 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69905 ']' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69905 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69905 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.273 killing process with pid 69905 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69905' 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69905 00:36:25.273 23:18:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69905 00:36:26.645 00:36:26.645 real 0m2.539s 00:36:26.645 user 0m2.671s 00:36:26.645 sys 0m0.319s 00:36:26.645 23:18:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.645 23:18:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:26.645 ************************************ 00:36:26.645 END TEST xnvme_rpc 00:36:26.645 ************************************ 00:36:26.903 23:18:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:26.903 23:18:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:26.903 23:18:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.903 23:18:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:26.903 ************************************ 00:36:26.903 START TEST xnvme_bdevperf 00:36:26.903 ************************************ 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:26.903 23:18:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:26.903 { 00:36:26.903 "subsystems": [ 00:36:26.903 { 00:36:26.903 "subsystem": "bdev", 00:36:26.903 "config": [ 00:36:26.903 { 00:36:26.903 "params": { 00:36:26.903 "io_mechanism": "io_uring", 00:36:26.903 "conserve_cpu": false, 00:36:26.903 "filename": "/dev/nvme0n1", 00:36:26.903 "name": "xnvme_bdev" 00:36:26.903 }, 00:36:26.903 "method": "bdev_xnvme_create" 00:36:26.903 }, 00:36:26.903 { 00:36:26.903 "method": "bdev_wait_for_examine" 00:36:26.903 } 00:36:26.903 ] 00:36:26.903 } 00:36:26.903 ] 00:36:26.903 } 00:36:26.903 [2024-12-09 23:18:07.378068] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:36:26.903 [2024-12-09 23:18:07.378182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69973 ] 00:36:26.903 [2024-12-09 23:18:07.534416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.196 [2024-12-09 23:18:07.628452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.453 Running I/O for 5 seconds... 00:36:29.446 55409.00 IOPS, 216.44 MiB/s [2024-12-09T23:18:11.016Z] 51372.00 IOPS, 200.67 MiB/s [2024-12-09T23:18:11.948Z] 54962.00 IOPS, 214.70 MiB/s [2024-12-09T23:18:12.882Z] 58226.75 IOPS, 227.45 MiB/s [2024-12-09T23:18:12.882Z] 59936.20 IOPS, 234.13 MiB/s 00:36:32.246 Latency(us) 00:36:32.246 [2024-12-09T23:18:12.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.246 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:32.246 xnvme_bdev : 5.00 59903.65 234.00 0.00 0.00 1064.27 48.25 188743.68 00:36:32.246 [2024-12-09T23:18:12.882Z] =================================================================================================================== 00:36:32.246 [2024-12-09T23:18:12.882Z] Total : 59903.65 234.00 0.00 0.00 1064.27 48.25 188743.68 00:36:33.182 23:18:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:33.182 23:18:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:33.182 23:18:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:33.182 23:18:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:33.182 23:18:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:33.182 { 00:36:33.182 "subsystems": [ 00:36:33.182 { 00:36:33.182 "subsystem": "bdev", 00:36:33.182 "config": [ 00:36:33.182 { 00:36:33.182 "params": { 00:36:33.182 "io_mechanism": "io_uring", 00:36:33.182 "conserve_cpu": false, 00:36:33.182 "filename": "/dev/nvme0n1", 00:36:33.182 "name": "xnvme_bdev" 00:36:33.182 }, 00:36:33.182 "method": "bdev_xnvme_create" 00:36:33.182 }, 00:36:33.182 { 00:36:33.182 "method": "bdev_wait_for_examine" 00:36:33.182 } 00:36:33.182 ] 00:36:33.182 } 00:36:33.182 ] 00:36:33.182 } 00:36:33.182 [2024-12-09 23:18:13.654995] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
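Each xnvme_fio_plugin section in this log (libaio above, the io_uring one further below) assembles the same fio command line. With the xtrace noise stripped, the effective invocation is approximately the following; the trailing fd-62 redirect is an illustrative stand-in for the harness's gen_conf pipe, reusing the config file from the earlier sketch:

    # fio with SPDK's external bdev ioengine; the bdev JSON arrives on fd 62.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
        62</tmp/xnvme_libaio.json

The ASan library is preloaded here only because this is a SPDK_RUN_ASAN=1 build; a non-sanitized build would typically preload the spdk_bdev plugin alone.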
00:36:33.182 [2024-12-09 23:18:13.655111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70049 ] 00:36:33.182 [2024-12-09 23:18:13.815067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.441 [2024-12-09 23:18:13.912199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.699 Running I/O for 5 seconds... 00:36:35.581 18681.00 IOPS, 72.97 MiB/s [2024-12-09T23:18:17.159Z] 12577.00 IOPS, 49.13 MiB/s [2024-12-09T23:18:18.541Z] 10836.67 IOPS, 42.33 MiB/s [2024-12-09T23:18:19.479Z] 10022.25 IOPS, 39.15 MiB/s [2024-12-09T23:18:19.479Z] 9586.00 IOPS, 37.45 MiB/s 00:36:38.843 Latency(us) 00:36:38.843 [2024-12-09T23:18:19.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.843 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:36:38.843 xnvme_bdev : 5.01 9584.29 37.44 0.00 0.00 6668.84 47.66 111310.38 00:36:38.843 [2024-12-09T23:18:19.479Z] =================================================================================================================== 00:36:38.843 [2024-12-09T23:18:19.479Z] Total : 9584.29 37.44 0.00 0.00 6668.84 47.66 111310.38 00:36:39.179 00:36:39.179 real 0m12.441s 00:36:39.179 user 0m5.956s 00:36:39.179 sys 0m6.282s 00:36:39.179 23:18:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.179 23:18:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:39.179 ************************************ 00:36:39.179 END TEST xnvme_bdevperf 00:36:39.179 ************************************ 00:36:39.179 23:18:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:36:39.179 23:18:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.179 23:18:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.179 23:18:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:39.453 ************************************ 00:36:39.453 START TEST xnvme_fio_plugin 00:36:39.453 ************************************ 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:39.453 23:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.453 { 00:36:39.453 "subsystems": [ 00:36:39.453 { 00:36:39.453 "subsystem": "bdev", 00:36:39.453 "config": [ 00:36:39.453 { 00:36:39.453 "params": { 00:36:39.453 "io_mechanism": "io_uring", 00:36:39.453 "conserve_cpu": false, 00:36:39.453 "filename": "/dev/nvme0n1", 00:36:39.453 "name": "xnvme_bdev" 00:36:39.453 }, 00:36:39.453 "method": "bdev_xnvme_create" 00:36:39.453 }, 00:36:39.453 { 00:36:39.453 "method": "bdev_wait_for_examine" 00:36:39.453 } 00:36:39.453 ] 00:36:39.453 } 00:36:39.453 ] 00:36:39.453 } 00:36:39.453 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:39.453 fio-3.35 00:36:39.453 Starting 1 thread 00:36:46.030 00:36:46.030 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70162: Mon Dec 9 23:18:25 2024 00:36:46.030 read: IOPS=62.6k, BW=244MiB/s (256MB/s)(1222MiB/5001msec) 00:36:46.030 slat (nsec): min=2873, max=59560, avg=3572.45, stdev=1225.49 00:36:46.030 clat (usec): min=51, max=30563, avg=895.90, stdev=342.28 00:36:46.030 lat (usec): min=54, max=30566, avg=899.47, stdev=342.42 00:36:46.030 clat percentiles (usec): 00:36:46.030 | 1.00th=[ 529], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 717], 00:36:46.030 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 857], 00:36:46.030 | 70.00th=[ 898], 80.00th=[ 996], 90.00th=[ 1172], 95.00th=[ 1385], 00:36:46.030 | 99.00th=[ 2212], 99.50th=[ 2638], 99.90th=[ 4178], 99.95th=[ 5145], 00:36:46.030 | 99.99th=[ 8848] 00:36:46.030 bw ( KiB/s): min=216960, max=275344, 
per=100.00%, avg=254802.67, stdev=22408.65, samples=9 00:36:46.030 iops : min=54240, max=68836, avg=63700.67, stdev=5602.16, samples=9 00:36:46.030 lat (usec) : 100=0.01%, 250=0.05%, 500=0.71%, 750=28.22%, 1000=51.28% 00:36:46.031 lat (msec) : 2=18.28%, 4=1.34%, 10=0.10%, 20=0.01%, 50=0.01% 00:36:46.031 cpu : usr=41.06%, sys=58.18%, ctx=41, majf=0, minf=762 00:36:46.031 IO depths : 1=1.1%, 2=2.3%, 4=4.9%, 8=10.8%, 16=24.3%, 32=54.7%, >=64=1.9% 00:36:46.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.031 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:46.031 issued rwts: total=312838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:46.031 00:36:46.031 Run status group 0 (all jobs): 00:36:46.031 READ: bw=244MiB/s (256MB/s), 244MiB/s-244MiB/s (256MB/s-256MB/s), io=1222MiB (1281MB), run=5001-5001msec 00:36:46.031 ----------------------------------------------------- 00:36:46.031 Suppressions used: 00:36:46.031 count bytes template 00:36:46.031 1 11 /usr/src/fio/parse.c 00:36:46.031 1 8 libtcmalloc_minimal.so 00:36:46.031 1 904 libcrypto.so 00:36:46.031 ----------------------------------------------------- 00:36:46.031 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:46.031 23:18:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:46.031 { 00:36:46.031 "subsystems": [ 00:36:46.031 { 00:36:46.031 "subsystem": "bdev", 00:36:46.031 "config": [ 00:36:46.031 { 00:36:46.031 "params": { 00:36:46.031 "io_mechanism": "io_uring", 00:36:46.031 "conserve_cpu": false, 00:36:46.031 "filename": "/dev/nvme0n1", 00:36:46.031 "name": "xnvme_bdev" 00:36:46.031 }, 00:36:46.031 "method": "bdev_xnvme_create" 00:36:46.031 }, 00:36:46.031 { 00:36:46.031 "method": "bdev_wait_for_examine" 00:36:46.031 } 00:36:46.031 ] 00:36:46.031 } 00:36:46.031 ] 00:36:46.031 } 00:36:46.292 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:46.292 fio-3.35 00:36:46.292 Starting 1 thread 00:36:52.882 00:36:52.882 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70254: Mon Dec 9 23:18:32 2024 00:36:52.882 write: IOPS=24.3k, BW=94.9MiB/s (99.6MB/s)(490MiB/5166msec); 0 zone resets 00:36:52.882 slat (nsec): min=2902, max=54603, avg=3833.70, stdev=1417.82 00:36:52.882 clat (usec): min=82, max=435963, avg=2487.81, stdev=19561.61 00:36:52.882 lat (usec): min=85, max=435966, avg=2491.64, stdev=19561.65 00:36:52.882 clat percentiles (usec): 00:36:52.882 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 725], 20.00th=[ 783], 00:36:52.882 | 30.00th=[ 824], 40.00th=[ 865], 50.00th=[ 906], 60.00th=[ 963], 00:36:52.882 | 70.00th=[ 1029], 80.00th=[ 1123], 90.00th=[ 1270], 95.00th=[ 1418], 00:36:52.882 | 99.00th=[ 2040], 99.50th=[143655], 99.90th=[350225], 99.95th=[429917], 00:36:52.882 | 99.99th=[434111] 00:36:52.882 bw ( KiB/s): min=11000, max=249344, per=100.00%, avg=100394.40, stdev=90425.17, samples=10 00:36:52.882 iops : min= 2750, max=62336, avg=25098.60, stdev=22606.29, samples=10 00:36:52.882 lat (usec) : 100=0.01%, 250=0.14%, 500=0.23%, 750=13.71%, 1000=51.84% 00:36:52.882 lat (msec) : 2=33.05%, 4=0.23%, 10=0.01%, 50=0.05%, 100=0.05% 00:36:52.882 lat (msec) : 250=0.47%, 500=0.22% 00:36:52.882 cpu : usr=28.94%, sys=70.49%, ctx=16, majf=0, minf=763 00:36:52.882 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.3%, 16=24.7%, 32=50.6%, >=64=1.7% 00:36:52.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.882 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:36:52.882 issued rwts: total=0,125557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.882 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:52.882 00:36:52.882 Run status group 0 (all jobs): 00:36:52.882 WRITE: bw=94.9MiB/s (99.6MB/s), 94.9MiB/s-94.9MiB/s (99.6MB/s-99.6MB/s), io=490MiB (514MB), run=5166-5166msec 00:36:53.144 ----------------------------------------------------- 00:36:53.144 Suppressions used: 00:36:53.144 count bytes template 00:36:53.144 1 11 /usr/src/fio/parse.c 00:36:53.144 1 8 libtcmalloc_minimal.so 00:36:53.144 1 904 libcrypto.so 00:36:53.144 ----------------------------------------------------- 
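Both fio passes above go through SPDK's fio plugin (ioengine=spdk_bdev). Because this build is ASan-instrumented, the harness resolves libasan from the plugin's ldd output and preloads it ahead of the plugin, which is what the LD_PRELOAD lines in the trace do. A condensed sketch of that sequence, assuming the plugin and fio paths from this run, with a hypothetical ./bdev.json standing in for the /dev/fd/62 config:

# Sketch of the sanitizer-preload dance traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev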
00:36:53.144 00:36:53.144 00:36:53.144 real 0m13.782s 00:36:53.144 user 0m6.311s 00:36:53.144 sys 0m7.076s 00:36:53.144 ************************************ 00:36:53.144 23:18:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.144 23:18:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:53.144 END TEST xnvme_fio_plugin 00:36:53.144 ************************************ 00:36:53.144 23:18:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:53.144 23:18:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:36:53.144 23:18:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:36:53.144 23:18:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:53.144 23:18:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:53.144 23:18:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.144 23:18:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:53.144 ************************************ 00:36:53.144 START TEST xnvme_rpc 00:36:53.144 ************************************ 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70340 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70340 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70340 ']' 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:53.144 23:18:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:53.144 [2024-12-09 23:18:33.736129] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
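xnvme_rpc drives the bdev_xnvme RPCs against a bare spdk_tgt: create the bdev, read its parameters back through framework_get_config, then delete it. The rpc_cmd calls in the trace that follows are the test wrapper around scripts/rpc.py; a hand-run sketch of the same flow, assuming spdk_tgt listens on the default /var/tmp/spdk.sock and using the -c (conserve_cpu) spelling from this pass:

# Sketch only; the harness uses waitforlisten rather than a fixed sleep.
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &
sleep 1
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev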
00:36:53.144 [2024-12-09 23:18:33.736883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70340 ] 00:36:53.405 [2024-12-09 23:18:33.899724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.675 [2024-12-09 23:18:34.041704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.248 xnvme_bdev 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.248 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70340 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70340 ']' 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70340 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.518 23:18:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70340 00:36:54.518 killing process with pid 70340 00:36:54.519 23:18:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:54.519 23:18:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:54.519 23:18:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70340' 00:36:54.519 23:18:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70340 00:36:54.519 23:18:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70340 00:36:56.505 00:36:56.505 real 0m3.184s 00:36:56.505 user 0m3.096s 00:36:56.505 sys 0m0.554s 00:36:56.505 ************************************ 00:36:56.505 END TEST xnvme_rpc 00:36:56.505 ************************************ 00:36:56.505 23:18:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.505 23:18:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:56.505 23:18:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:56.505 23:18:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.505 23:18:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.505 23:18:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:56.506 ************************************ 00:36:56.506 START TEST xnvme_bdevperf 00:36:56.506 ************************************ 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:56.506 23:18:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:56.506 { 00:36:56.506 "subsystems": [ 00:36:56.506 { 00:36:56.506 "subsystem": "bdev", 00:36:56.506 "config": [ 00:36:56.506 { 00:36:56.506 "params": { 00:36:56.506 "io_mechanism": "io_uring", 00:36:56.506 "conserve_cpu": true, 00:36:56.506 "filename": "/dev/nvme0n1", 00:36:56.506 "name": "xnvme_bdev" 00:36:56.506 }, 00:36:56.506 "method": "bdev_xnvme_create" 00:36:56.506 }, 00:36:56.506 { 00:36:56.506 "method": "bdev_wait_for_examine" 00:36:56.506 } 00:36:56.506 ] 00:36:56.506 } 00:36:56.506 ] 00:36:56.506 } 00:36:56.506 [2024-12-09 23:18:36.988306] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:36:56.506 [2024-12-09 23:18:36.988457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ] 00:36:56.766 [2024-12-09 23:18:37.159935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.766 [2024-12-09 23:18:37.321524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.027 Running I/O for 5 seconds... 00:36:59.351 34845.00 IOPS, 136.11 MiB/s [2024-12-09T23:18:40.931Z] 34592.00 IOPS, 135.12 MiB/s [2024-12-09T23:18:41.891Z] 34578.00 IOPS, 135.07 MiB/s [2024-12-09T23:18:42.832Z] 36466.00 IOPS, 142.45 MiB/s [2024-12-09T23:18:42.832Z] 37138.60 IOPS, 145.07 MiB/s 00:37:02.196 Latency(us) 00:37:02.196 [2024-12-09T23:18:42.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.196 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:02.196 xnvme_bdev : 5.01 37097.15 144.91 0.00 0.00 1720.01 77.19 21173.17 00:37:02.196 [2024-12-09T23:18:42.832Z] =================================================================================================================== 00:37:02.196 [2024-12-09T23:18:42.832Z] Total : 37097.15 144.91 0.00 0.00 1720.01 77.19 21173.17 00:37:03.132 23:18:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:03.132 23:18:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:03.132 23:18:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:03.132 23:18:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:03.132 23:18:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.132 { 00:37:03.132 "subsystems": [ 00:37:03.132 { 00:37:03.132 "subsystem": "bdev", 00:37:03.132 "config": [ 00:37:03.132 { 00:37:03.132 "params": { 00:37:03.132 "io_mechanism": "io_uring", 00:37:03.132 "conserve_cpu": true, 00:37:03.132 "filename": "/dev/nvme0n1", 00:37:03.132 "name": "xnvme_bdev" 00:37:03.132 }, 00:37:03.132 "method": "bdev_xnvme_create" 00:37:03.132 }, 00:37:03.132 { 00:37:03.132 "method": "bdev_wait_for_examine" 00:37:03.132 } 00:37:03.132 ] 00:37:03.132 } 00:37:03.132 ] 00:37:03.132 } 00:37:03.132 [2024-12-09 23:18:43.605881] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:37:03.132 [2024-12-09 23:18:43.606632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70495 ] 00:37:03.391 [2024-12-09 23:18:43.771303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.391 [2024-12-09 23:18:43.865232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.649 Running I/O for 5 seconds... 00:37:05.609 8815.00 IOPS, 34.43 MiB/s [2024-12-09T23:18:47.179Z] 4408.00 IOPS, 17.22 MiB/s [2024-12-09T23:18:48.120Z] 6974.67 IOPS, 27.24 MiB/s [2024-12-09T23:18:49.505Z] 6519.75 IOPS, 25.47 MiB/s [2024-12-09T23:18:49.505Z] 6884.20 IOPS, 26.89 MiB/s 00:37:08.869 Latency(us) 00:37:08.869 [2024-12-09T23:18:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.869 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:08.869 xnvme_bdev : 5.02 6876.00 26.86 0.00 0.00 9289.79 64.98 1490591.11 00:37:08.869 [2024-12-09T23:18:49.505Z] =================================================================================================================== 00:37:08.869 [2024-12-09T23:18:49.505Z] Total : 6876.00 26.86 0.00 0.00 9289.79 64.98 1490591.11 00:37:09.441 00:37:09.441 real 0m13.076s 00:37:09.441 user 0m9.238s 00:37:09.441 sys 0m3.221s 00:37:09.441 23:18:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:09.441 ************************************ 00:37:09.441 END TEST xnvme_bdevperf 00:37:09.441 ************************************ 00:37:09.441 23:18:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.441 23:18:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:09.441 23:18:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:09.441 23:18:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:09.441 23:18:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:09.441 ************************************ 00:37:09.441 START TEST xnvme_fio_plugin 00:37:09.441 ************************************ 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:09.441 23:18:50 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:09.441 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:09.442 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:09.702 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:09.702 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:09.702 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:09.702 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:09.702 23:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:09.702 { 00:37:09.702 "subsystems": [ 00:37:09.702 { 00:37:09.702 "subsystem": "bdev", 00:37:09.702 "config": [ 00:37:09.702 { 00:37:09.702 "params": { 00:37:09.702 "io_mechanism": "io_uring", 00:37:09.702 "conserve_cpu": true, 00:37:09.702 "filename": "/dev/nvme0n1", 00:37:09.702 "name": "xnvme_bdev" 00:37:09.702 }, 00:37:09.702 "method": "bdev_xnvme_create" 00:37:09.702 }, 00:37:09.702 { 00:37:09.702 "method": "bdev_wait_for_examine" 00:37:09.702 } 00:37:09.702 ] 00:37:09.702 } 00:37:09.702 ] 00:37:09.702 } 00:37:09.702 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:09.702 fio-3.35 00:37:09.702 Starting 1 thread 00:37:16.292 00:37:16.292 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70614: Mon Dec 9 23:18:56 2024 00:37:16.292 read: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5001msec) 00:37:16.292 slat (nsec): min=2855, max=99588, avg=3972.29, stdev=1999.17 00:37:16.292 clat (usec): min=945, max=3647, avg=1619.18, stdev=241.80 00:37:16.292 lat (usec): min=949, max=3650, avg=1623.15, stdev=242.13 00:37:16.292 clat percentiles (usec): 00:37:16.292 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1434], 00:37:16.292 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1631], 00:37:16.292 | 70.00th=[ 1696], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2057], 00:37:16.292 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 3163], 99.95th=[ 3359], 00:37:16.292 | 99.99th=[ 3589] 00:37:16.292 bw ( KiB/s): min=135680, 
max=148480, per=99.97%, avg=143644.44, stdev=3772.08, samples=9 00:37:16.292 iops : min=33920, max=37120, avg=35911.11, stdev=943.02, samples=9 00:37:16.292 lat (usec) : 1000=0.01% 00:37:16.292 lat (msec) : 2=93.20%, 4=6.79% 00:37:16.292 cpu : usr=37.30%, sys=58.06%, ctx=14, majf=0, minf=762 00:37:16.292 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:16.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.292 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:37:16.292 issued rwts: total=179648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:16.292 00:37:16.292 Run status group 0 (all jobs): 00:37:16.292 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5001-5001msec 00:37:16.552 ----------------------------------------------------- 00:37:16.552 Suppressions used: 00:37:16.552 count bytes template 00:37:16.552 1 11 /usr/src/fio/parse.c 00:37:16.552 1 8 libtcmalloc_minimal.so 00:37:16.552 1 904 libcrypto.so 00:37:16.552 ----------------------------------------------------- 00:37:16.552 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:16.553 23:18:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:16.553 { 00:37:16.553 "subsystems": [ 00:37:16.553 { 00:37:16.553 "subsystem": "bdev", 00:37:16.553 "config": [ 00:37:16.553 { 00:37:16.553 "params": { 00:37:16.553 "io_mechanism": "io_uring", 00:37:16.553 "conserve_cpu": true, 00:37:16.553 "filename": "/dev/nvme0n1", 00:37:16.553 "name": "xnvme_bdev" 00:37:16.553 }, 00:37:16.553 "method": "bdev_xnvme_create" 00:37:16.553 }, 00:37:16.553 { 00:37:16.553 "method": "bdev_wait_for_examine" 00:37:16.553 } 00:37:16.553 ] 00:37:16.553 } 00:37:16.553 ] 00:37:16.553 } 00:37:16.814 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:16.814 fio-3.35 00:37:16.814 Starting 1 thread 00:37:23.399 00:37:23.399 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70706: Mon Dec 9 23:19:02 2024 00:37:23.399 write: IOPS=30.4k, BW=119MiB/s (124MB/s)(594MiB/5011msec); 0 zone resets 00:37:23.399 slat (usec): min=2, max=381, avg= 4.46, stdev= 3.13 00:37:23.399 clat (usec): min=58, max=38213, avg=1957.17, stdev=2613.76 00:37:23.399 lat (usec): min=61, max=38216, avg=1961.62, stdev=2613.86 00:37:23.399 clat percentiles (usec): 00:37:23.399 | 1.00th=[ 141], 5.00th=[ 338], 10.00th=[ 750], 20.00th=[ 1090], 00:37:23.399 | 30.00th=[ 1221], 40.00th=[ 1352], 50.00th=[ 1450], 60.00th=[ 1549], 00:37:23.399 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 2008], 95.00th=[ 7177], 00:37:23.399 | 99.00th=[14091], 99.50th=[14877], 99.90th=[17695], 99.95th=[19792], 00:37:23.399 | 99.99th=[35914] 00:37:23.399 bw ( KiB/s): min=56344, max=186368, per=100.00%, avg=121653.60, stdev=45928.50, samples=10 00:37:23.399 iops : min=14086, max=46592, avg=30413.40, stdev=11482.13, samples=10 00:37:23.399 lat (usec) : 100=0.21%, 250=2.93%, 500=4.11%, 750=2.73%, 1000=5.21% 00:37:23.399 lat (msec) : 2=74.54%, 4=4.64%, 10=1.11%, 20=4.48%, 50=0.05% 00:37:23.399 cpu : usr=52.50%, sys=40.92%, ctx=14, majf=0, minf=763 00:37:23.399 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.3%, 16=20.7%, 32=54.7%, >=64=5.3% 00:37:23.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.399 complete : 0=0.0%, 4=97.6%, 8=0.7%, 16=0.4%, 32=0.1%, 64=1.2%, >=64=0.0% 00:37:23.399 issued rwts: total=0,152120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:23.399 00:37:23.399 Run status group 0 (all jobs): 00:37:23.399 WRITE: bw=119MiB/s (124MB/s), 119MiB/s-119MiB/s (124MB/s-124MB/s), io=594MiB (623MB), run=5011-5011msec 00:37:23.399 ----------------------------------------------------- 00:37:23.399 Suppressions used: 00:37:23.399 count bytes template 00:37:23.399 1 11 /usr/src/fio/parse.c 00:37:23.399 1 8 libtcmalloc_minimal.so 00:37:23.399 1 904 libcrypto.so 00:37:23.399 ----------------------------------------------------- 00:37:23.399 00:37:23.399 00:37:23.399 real 0m13.887s 00:37:23.399 user 0m7.443s 00:37:23.399 sys 0m5.551s 00:37:23.399 
************************************ 00:37:23.399 END TEST xnvme_fio_plugin 00:37:23.399 ************************************ 00:37:23.399 23:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.399 23:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:37:23.399 23:19:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:23.399 23:19:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.399 23:19:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.399 23:19:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:23.399 ************************************ 00:37:23.399 START TEST xnvme_rpc 00:37:23.399 ************************************ 00:37:23.399 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:23.399 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:23.399 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:23.399 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:23.399 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70791 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70791 00:37:23.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70791 ']' 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:23.400 23:19:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:23.661 [2024-12-09 23:19:04.116113] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
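From here the suite repeats the rpc/bdevperf/fio cycle with io_mechanism io_uring_cmd, which submits NVMe commands to the character device (/dev/ng0n1) over io_uring passthru instead of going through the /dev/nvme0n1 block device. Only the mechanism and filename arguments change; a sketch mirroring the create/inspect steps in the trace below (the trailing '' in the traced create call is the empty conserve_cpu flag for this false pass):

# Sketch only; assumes the NVMe char device from this run is present.
ls -l /dev/ng0n1
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'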
00:37:23.661 [2024-12-09 23:19:04.116263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70791 ] 00:37:23.661 [2024-12-09 23:19:04.281352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.923 [2024-12-09 23:19:04.413226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.499 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.499 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:24.499 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:37:24.499 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.499 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 xnvme_bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:24.760 
23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70791 00:37:24.760 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70791 ']' 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70791 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70791 00:37:24.761 killing process with pid 70791 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70791' 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70791 00:37:24.761 23:19:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70791 00:37:26.676 00:37:26.676 real 0m2.958s 00:37:26.676 user 0m2.963s 00:37:26.676 sys 0m0.484s 00:37:26.676 ************************************ 00:37:26.676 END TEST xnvme_rpc 00:37:26.676 ************************************ 00:37:26.676 23:19:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.676 23:19:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 23:19:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:26.677 23:19:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:26.677 23:19:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.677 23:19:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 ************************************ 00:37:26.677 START TEST xnvme_bdevperf 00:37:26.677 ************************************ 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:26.677 23:19:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 { 00:37:26.677 "subsystems": [ 00:37:26.677 { 00:37:26.677 "subsystem": "bdev", 00:37:26.677 "config": [ 00:37:26.677 { 00:37:26.677 "params": { 00:37:26.677 "io_mechanism": "io_uring_cmd", 00:37:26.677 "conserve_cpu": false, 00:37:26.677 "filename": "/dev/ng0n1", 00:37:26.677 "name": "xnvme_bdev" 00:37:26.677 }, 00:37:26.677 "method": "bdev_xnvme_create" 00:37:26.677 }, 00:37:26.677 { 00:37:26.677 "method": "bdev_wait_for_examine" 00:37:26.677 } 00:37:26.677 ] 00:37:26.677 } 00:37:26.677 ] 00:37:26.677 } 00:37:26.677 [2024-12-09 23:19:07.135782] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:26.677 [2024-12-09 23:19:07.136172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:37:26.677 [2024-12-09 23:19:07.302150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.938 [2024-12-09 23:19:07.435880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.200 Running I/O for 5 seconds... 00:37:29.540 34689.00 IOPS, 135.50 MiB/s [2024-12-09T23:19:10.748Z] 34727.50 IOPS, 135.65 MiB/s [2024-12-09T23:19:12.135Z] 34298.33 IOPS, 133.98 MiB/s [2024-12-09T23:19:13.079Z] 35372.25 IOPS, 138.17 MiB/s [2024-12-09T23:19:13.079Z] 35479.40 IOPS, 138.59 MiB/s 00:37:32.443 Latency(us) 00:37:32.443 [2024-12-09T23:19:13.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.443 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:32.443 xnvme_bdev : 5.01 35452.73 138.49 0.00 0.00 1801.11 316.65 32868.82 00:37:32.443 [2024-12-09T23:19:13.079Z] =================================================================================================================== 00:37:32.443 [2024-12-09T23:19:13.079Z] Total : 35452.73 138.49 0.00 0.00 1801.11 316.65 32868.82 00:37:33.013 23:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:33.013 23:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:33.013 23:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:33.013 23:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:33.013 23:19:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.013 { 00:37:33.013 "subsystems": [ 00:37:33.013 { 00:37:33.013 "subsystem": "bdev", 00:37:33.013 "config": [ 00:37:33.013 { 00:37:33.013 "params": { 00:37:33.013 "io_mechanism": "io_uring_cmd", 00:37:33.013 "conserve_cpu": false, 00:37:33.013 "filename": "/dev/ng0n1", 00:37:33.013 "name": "xnvme_bdev" 00:37:33.013 }, 00:37:33.013 "method": "bdev_xnvme_create" 00:37:33.013 }, 00:37:33.013 { 00:37:33.013 "method": "bdev_wait_for_examine" 00:37:33.013 } 00:37:33.013 ] 00:37:33.013 } 00:37:33.013 ] 00:37:33.013 } 00:37:33.013 [2024-12-09 23:19:13.630091] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
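For the char-device target the bdevperf sweep goes beyond randread and randwrite: the runs below add unmap and write_zeroes passes, with -w selecting the workload and every other flag held fixed. A sketch of the sweep, assuming the config JSON printed above is saved to a hypothetical ./bdev.json:

# Sketch of the four-workload sweep traced below.
for w in randread randwrite unmap write_zeroes; do
  ./build/examples/bdevperf --json ./bdev.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done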
00:37:33.013 [2024-12-09 23:19:13.630259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:37:33.274 [2024-12-09 23:19:13.797168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.535 [2024-12-09 23:19:13.914883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.797 Running I/O for 5 seconds... 00:37:35.677 12742.00 IOPS, 49.77 MiB/s [2024-12-09T23:19:17.255Z] 12673.50 IOPS, 49.51 MiB/s [2024-12-09T23:19:18.638Z] 12700.33 IOPS, 49.61 MiB/s [2024-12-09T23:19:19.582Z] 12801.50 IOPS, 50.01 MiB/s [2024-12-09T23:19:19.582Z] 12788.80 IOPS, 49.96 MiB/s 00:37:38.946 Latency(us) 00:37:38.946 [2024-12-09T23:19:19.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.946 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:38.946 xnvme_bdev : 5.01 12779.02 49.92 0.00 0.00 4999.27 55.93 20971.52 00:37:38.946 [2024-12-09T23:19:19.582Z] =================================================================================================================== 00:37:38.946 [2024-12-09T23:19:19.582Z] Total : 12779.02 49.92 0.00 0.00 4999.27 55.93 20971.52 00:37:39.521 23:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:39.521 23:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:37:39.521 23:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:39.521 23:19:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:39.521 23:19:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.521 { 00:37:39.521 "subsystems": [ 00:37:39.521 { 00:37:39.521 "subsystem": "bdev", 00:37:39.521 "config": [ 00:37:39.521 { 00:37:39.521 "params": { 00:37:39.521 "io_mechanism": "io_uring_cmd", 00:37:39.521 "conserve_cpu": false, 00:37:39.521 "filename": "/dev/ng0n1", 00:37:39.521 "name": "xnvme_bdev" 00:37:39.521 }, 00:37:39.521 "method": "bdev_xnvme_create" 00:37:39.521 }, 00:37:39.521 { 00:37:39.521 "method": "bdev_wait_for_examine" 00:37:39.521 } 00:37:39.521 ] 00:37:39.521 } 00:37:39.521 ] 00:37:39.521 } 00:37:39.521 [2024-12-09 23:19:20.139748] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:39.521 [2024-12-09 23:19:20.139920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71009 ] 00:37:39.788 [2024-12-09 23:19:20.311642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.050 [2024-12-09 23:19:20.448204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.310 Running I/O for 5 seconds... 
00:37:42.199 71616.00 IOPS, 279.75 MiB/s [2024-12-09T23:19:23.776Z] 70944.00 IOPS, 277.12 MiB/s [2024-12-09T23:19:25.167Z] 71296.00 IOPS, 278.50 MiB/s [2024-12-09T23:19:26.113Z] 72368.00 IOPS, 282.69 MiB/s 00:37:45.477 Latency(us) 00:37:45.477 [2024-12-09T23:19:26.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.477 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:37:45.477 xnvme_bdev : 5.00 71929.91 280.98 0.00 0.00 885.97 513.58 2596.23 00:37:45.477 [2024-12-09T23:19:26.113Z] =================================================================================================================== 00:37:45.477 [2024-12-09T23:19:26.113Z] Total : 71929.91 280.98 0.00 0.00 885.97 513.58 2596.23 00:37:46.051 23:19:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:46.051 23:19:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:37:46.051 23:19:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:46.051 23:19:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:46.051 23:19:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:46.051 { 00:37:46.051 "subsystems": [ 00:37:46.051 { 00:37:46.051 "subsystem": "bdev", 00:37:46.051 "config": [ 00:37:46.051 { 00:37:46.051 "params": { 00:37:46.051 "io_mechanism": "io_uring_cmd", 00:37:46.051 "conserve_cpu": false, 00:37:46.051 "filename": "/dev/ng0n1", 00:37:46.051 "name": "xnvme_bdev" 00:37:46.051 }, 00:37:46.051 "method": "bdev_xnvme_create" 00:37:46.051 }, 00:37:46.051 { 00:37:46.051 "method": "bdev_wait_for_examine" 00:37:46.051 } 00:37:46.051 ] 00:37:46.051 } 00:37:46.051 ] 00:37:46.051 } 00:37:46.312 [2024-12-09 23:19:26.724674] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:46.312 [2024-12-09 23:19:26.725042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:37:46.312 [2024-12-09 23:19:26.895510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.573 [2024-12-09 23:19:27.049593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.834 Running I/O for 5 seconds... 
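The write_zeroes pass that follows is by far the slowest of the four workloads on this device. The MiB/s column in these tables is derived from IOPS and the 4096-byte IO size; for the final average reported below:

    # MiB/s = IOPS * IO size / 2^20
    echo "150.58 * 4096 / 1048576" | bc -l    # ~= 0.588, matching the 0.59 MiB/s below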
00:37:49.162 138.00 IOPS, 0.54 MiB/s [2024-12-09T23:19:30.743Z] 139.00 IOPS, 0.54 MiB/s [2024-12-09T23:19:31.687Z] 156.67 IOPS, 0.61 MiB/s [2024-12-09T23:19:32.629Z] 152.75 IOPS, 0.60 MiB/s [2024-12-09T23:19:32.890Z] 151.20 IOPS, 0.59 MiB/s 00:37:52.254 Latency(us) 00:37:52.254 [2024-12-09T23:19:32.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.254 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:37:52.254 xnvme_bdev : 5.45 150.58 0.59 0.00 0.00 409050.11 485.22 806596.92 00:37:52.254 [2024-12-09T23:19:32.890Z] =================================================================================================================== 00:37:52.254 [2024-12-09T23:19:32.890Z] Total : 150.58 0.59 0.00 0.00 409050.11 485.22 806596.92 00:37:53.197 ************************************ 00:37:53.197 END TEST xnvme_bdevperf 00:37:53.197 ************************************ 00:37:53.197 00:37:53.197 real 0m26.671s 00:37:53.197 user 0m15.287s 00:37:53.197 sys 0m10.894s 00:37:53.197 23:19:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.197 23:19:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.197 23:19:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:53.197 23:19:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:53.197 23:19:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.197 23:19:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:53.197 ************************************ 00:37:53.197 START TEST xnvme_fio_plugin 00:37:53.197 ************************************ 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:53.197 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:53.459 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:53.459 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:53.459 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:53.459 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:53.459 23:19:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:53.459 { 00:37:53.459 "subsystems": [ 00:37:53.459 { 00:37:53.459 "subsystem": "bdev", 00:37:53.459 "config": [ 00:37:53.459 { 00:37:53.459 "params": { 00:37:53.459 "io_mechanism": "io_uring_cmd", 00:37:53.459 "conserve_cpu": false, 00:37:53.459 "filename": "/dev/ng0n1", 00:37:53.459 "name": "xnvme_bdev" 00:37:53.459 }, 00:37:53.459 "method": "bdev_xnvme_create" 00:37:53.459 }, 00:37:53.459 { 00:37:53.460 "method": "bdev_wait_for_examine" 00:37:53.460 } 00:37:53.460 ] 00:37:53.460 } 00:37:53.460 ] 00:37:53.460 } 00:37:53.460 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:53.460 fio-3.35 00:37:53.460 Starting 1 thread 00:38:00.059 00:38:00.059 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71209: Mon Dec 9 23:19:39 2024 00:38:00.059 read: IOPS=37.6k, BW=147MiB/s (154MB/s)(736MiB/5002msec) 00:38:00.059 slat (usec): min=2, max=236, avg= 3.84, stdev= 2.36 00:38:00.059 clat (usec): min=777, max=3400, avg=1543.51, stdev=321.42 00:38:00.059 lat (usec): min=780, max=3423, avg=1547.35, stdev=321.94 00:38:00.059 clat percentiles (usec): 00:38:00.059 | 1.00th=[ 979], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:38:00.059 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1516], 60.00th=[ 1598], 00:38:00.059 | 70.00th=[ 1680], 80.00th=[ 1795], 90.00th=[ 1958], 95.00th=[ 2114], 00:38:00.059 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3097], 99.95th=[ 3294], 00:38:00.059 | 99.99th=[ 3392] 00:38:00.059 bw ( KiB/s): min=133120, max=178688, per=100.00%, avg=151239.11, stdev=14569.40, samples=9 00:38:00.059 iops : min=33280, max=44672, avg=37809.78, stdev=3642.35, samples=9 00:38:00.059 lat (usec) : 1000=1.50% 00:38:00.059 lat (msec) : 2=90.39%, 4=8.12% 00:38:00.059 cpu : usr=33.63%, sys=64.35%, ctx=39, majf=0, minf=762 00:38:00.059 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:38:00.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.059 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:38:00.059 
issued rwts: total=188320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:00.059 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:00.059 00:38:00.059 Run status group 0 (all jobs): 00:38:00.059 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=736MiB (771MB), run=5002-5002msec 00:38:00.059 ----------------------------------------------------- 00:38:00.059 Suppressions used: 00:38:00.059 count bytes template 00:38:00.059 1 11 /usr/src/fio/parse.c 00:38:00.059 1 8 libtcmalloc_minimal.so 00:38:00.059 1 904 libcrypto.so 00:38:00.059 ----------------------------------------------------- 00:38:00.059 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:00.059 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:00.060 23:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:00.060 { 00:38:00.060 "subsystems": [ 00:38:00.060 { 00:38:00.060 "subsystem": "bdev", 00:38:00.060 "config": [ 00:38:00.060 { 00:38:00.060 "params": { 00:38:00.060 "io_mechanism": "io_uring_cmd", 00:38:00.060 "conserve_cpu": false, 00:38:00.060 "filename": "/dev/ng0n1", 00:38:00.060 "name": "xnvme_bdev" 00:38:00.060 }, 00:38:00.060 "method": "bdev_xnvme_create" 00:38:00.060 }, 00:38:00.060 { 00:38:00.060 "method": "bdev_wait_for_examine" 00:38:00.060 } 00:38:00.060 ] 00:38:00.060 } 00:38:00.060 ] 00:38:00.060 } 00:38:00.060 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:00.060 fio-3.35 00:38:00.060 Starting 1 thread 00:38:06.663 00:38:06.663 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71299: Mon Dec 9 23:19:46 2024 00:38:06.663 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5009msec); 0 zone resets 00:38:06.663 slat (usec): min=2, max=130, avg= 4.10, stdev= 2.05 00:38:06.663 clat (usec): min=61, max=27772, avg=1756.04, stdev=1997.57 00:38:06.663 lat (usec): min=65, max=27776, avg=1760.14, stdev=1997.68 00:38:06.663 clat percentiles (usec): 00:38:06.663 | 1.00th=[ 437], 5.00th=[ 824], 10.00th=[ 996], 20.00th=[ 1139], 00:38:06.663 | 30.00th=[ 1237], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1467], 00:38:06.663 | 70.00th=[ 1565], 80.00th=[ 1696], 90.00th=[ 1893], 95.00th=[ 2311], 00:38:06.663 | 99.00th=[13304], 99.50th=[14353], 99.90th=[16188], 99.95th=[17433], 00:38:06.663 | 99.99th=[25297] 00:38:06.663 bw ( KiB/s): min=51288, max=180768, per=100.00%, avg=134867.20, stdev=44359.40, samples=10 00:38:06.663 iops : min=12822, max=45192, avg=33716.80, stdev=11089.85, samples=10 00:38:06.663 lat (usec) : 100=0.01%, 250=0.28%, 500=1.11%, 750=2.14%, 1000=6.81% 00:38:06.663 lat (msec) : 2=82.31%, 4=3.43%, 10=1.40%, 20=2.48%, 50=0.04% 00:38:06.663 cpu : usr=35.96%, sys=62.82%, ctx=10, majf=0, minf=763 00:38:06.663 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.5%, 16=22.0%, 32=55.5%, >=64=3.0% 00:38:06.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.663 complete : 0=0.0%, 4=97.9%, 8=0.3%, 16=0.3%, 32=0.2%, 64=1.4%, >=64=0.0% 00:38:06.663 issued rwts: total=0,168642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:06.663 00:38:06.663 Run status group 0 (all jobs): 00:38:06.663 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5009-5009msec 00:38:06.925 ----------------------------------------------------- 00:38:06.925 Suppressions used: 00:38:06.925 count bytes template 00:38:06.925 1 11 /usr/src/fio/parse.c 00:38:06.925 1 8 libtcmalloc_minimal.so 00:38:06.925 1 904 libcrypto.so 00:38:06.925 ----------------------------------------------------- 00:38:06.925 00:38:06.925 00:38:06.925 real 0m13.525s 00:38:06.925 user 0m6.169s 00:38:06.925 sys 0m6.882s 00:38:06.925 23:19:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.925 ************************************ 00:38:06.925 END TEST xnvme_fio_plugin 00:38:06.925 ************************************ 00:38:06.925 23:19:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:06.925 23:19:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:38:06.925 23:19:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:38:06.925 23:19:47 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:38:06.925 23:19:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:38:06.925 23:19:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:06.925 23:19:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.925 23:19:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:06.925 ************************************ 00:38:06.925 START TEST xnvme_rpc 00:38:06.925 ************************************ 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71387 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71387 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71387 ']' 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:06.925 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.926 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.926 23:19:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:06.926 [2024-12-09 23:19:47.500368] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:38:06.926 [2024-12-09 23:19:47.500546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71387 ] 00:38:07.186 [2024-12-09 23:19:47.668289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.186 [2024-12-09 23:19:47.796868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 xnvme_bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:38:08.127 
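Rather than trusting the return code of the create call, the checks above read every parameter back out of the live configuration with framework_get_config and a jq selector. The same readback works against a running target with the repo's rpc.py client; a sketch, assuming spdk_tgt is listening on the default /var/tmp/spdk.sock:

    # create the bdev with conserve_cpu enabled (-c), as the test does
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    # read one parameter back; prints "true" for the config created above
    scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # tear down, mirroring the end of the test
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev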
23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71387 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71387 ']' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71387 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71387 00:38:08.127 killing process with pid 71387 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71387' 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71387 00:38:08.127 23:19:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71387 00:38:10.039 ************************************ 00:38:10.039 END TEST xnvme_rpc 00:38:10.039 ************************************ 00:38:10.039 00:38:10.039 real 0m2.962s 00:38:10.039 user 0m2.952s 00:38:10.039 sys 0m0.493s 00:38:10.039 23:19:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:10.039 23:19:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:10.039 23:19:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:38:10.039 23:19:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:10.039 23:19:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:10.039 23:19:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:10.039 ************************************ 00:38:10.039 START TEST xnvme_bdevperf 00:38:10.039 ************************************ 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:10.039 23:19:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:10.039 { 00:38:10.039 "subsystems": [ 00:38:10.039 { 00:38:10.039 "subsystem": "bdev", 00:38:10.039 "config": [ 00:38:10.039 { 00:38:10.039 "params": { 00:38:10.039 "io_mechanism": "io_uring_cmd", 00:38:10.039 "conserve_cpu": true, 00:38:10.039 "filename": "/dev/ng0n1", 00:38:10.039 "name": "xnvme_bdev" 00:38:10.039 }, 00:38:10.039 "method": "bdev_xnvme_create" 00:38:10.039 }, 00:38:10.039 { 00:38:10.039 "method": "bdev_wait_for_examine" 00:38:10.039 } 00:38:10.039 ] 00:38:10.039 } 00:38:10.039 ] 00:38:10.039 } 00:38:10.039 [2024-12-09 23:19:50.505953] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:10.039 [2024-12-09 23:19:50.506115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71457 ] 00:38:10.039 [2024-12-09 23:19:50.669638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.300 [2024-12-09 23:19:50.803451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.560 Running I/O for 5 seconds... 00:38:12.870 37640.00 IOPS, 147.03 MiB/s [2024-12-09T23:19:54.442Z] 41793.00 IOPS, 163.25 MiB/s [2024-12-09T23:19:55.384Z] 43733.67 IOPS, 170.83 MiB/s [2024-12-09T23:19:56.323Z] 43472.25 IOPS, 169.81 MiB/s 00:38:15.687 Latency(us) 00:38:15.687 [2024-12-09T23:19:56.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.687 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:15.687 xnvme_bdev : 5.00 43278.94 169.06 0.00 0.00 1474.56 604.95 7108.14 00:38:15.687 [2024-12-09T23:19:56.323Z] =================================================================================================================== 00:38:15.687 [2024-12-09T23:19:56.323Z] Total : 43278.94 169.06 0.00 0.00 1474.56 604.95 7108.14 00:38:16.272 23:19:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:16.272 23:19:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:16.272 23:19:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:16.272 23:19:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:16.272 23:19:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:16.272 { 00:38:16.272 "subsystems": [ 00:38:16.272 { 00:38:16.272 "subsystem": "bdev", 00:38:16.272 "config": [ 00:38:16.272 { 00:38:16.272 "params": { 00:38:16.272 "io_mechanism": "io_uring_cmd", 00:38:16.272 "conserve_cpu": true, 00:38:16.272 "filename": "/dev/ng0n1", 00:38:16.272 "name": "xnvme_bdev" 00:38:16.272 }, 00:38:16.272 "method": "bdev_xnvme_create" 00:38:16.272 }, 00:38:16.272 { 00:38:16.272 "method": "bdev_wait_for_examine" 00:38:16.272 } 00:38:16.272 ] 00:38:16.272 } 00:38:16.272 ] 00:38:16.272 } 00:38:16.533 [2024-12-09 23:19:56.917009] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:38:16.533 [2024-12-09 23:19:56.917146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71531 ] 00:38:16.533 [2024-12-09 23:19:57.083295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.794 [2024-12-09 23:19:57.217344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.056 Running I/O for 5 seconds... 00:38:18.944 35244.00 IOPS, 137.67 MiB/s [2024-12-09T23:20:00.523Z] 36441.50 IOPS, 142.35 MiB/s [2024-12-09T23:20:01.906Z] 37201.00 IOPS, 145.32 MiB/s [2024-12-09T23:20:02.848Z] 37954.25 IOPS, 148.26 MiB/s 00:38:22.212 Latency(us) 00:38:22.212 [2024-12-09T23:20:02.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.212 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:22.212 xnvme_bdev : 5.00 38838.31 151.71 0.00 0.00 1642.92 636.46 8469.27 00:38:22.212 [2024-12-09T23:20:02.848Z] =================================================================================================================== 00:38:22.212 [2024-12-09T23:20:02.848Z] Total : 38838.31 151.71 0.00 0.00 1642.92 636.46 8469.27 00:38:22.783 23:20:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:22.783 23:20:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:38:22.783 23:20:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:22.783 23:20:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:22.783 23:20:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:23.044 { 00:38:23.044 "subsystems": [ 00:38:23.044 { 00:38:23.044 "subsystem": "bdev", 00:38:23.044 "config": [ 00:38:23.044 { 00:38:23.044 "params": { 00:38:23.044 "io_mechanism": "io_uring_cmd", 00:38:23.044 "conserve_cpu": true, 00:38:23.044 "filename": "/dev/ng0n1", 00:38:23.044 "name": "xnvme_bdev" 00:38:23.044 }, 00:38:23.044 "method": "bdev_xnvme_create" 00:38:23.044 }, 00:38:23.044 { 00:38:23.044 "method": "bdev_wait_for_examine" 00:38:23.044 } 00:38:23.044 ] 00:38:23.044 } 00:38:23.044 ] 00:38:23.044 } 00:38:23.044 [2024-12-09 23:20:03.476231] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:23.044 [2024-12-09 23:20:03.476380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71612 ] 00:38:23.044 [2024-12-09 23:20:03.643202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.303 [2024-12-09 23:20:03.789144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.565 Running I/O for 5 seconds... 
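This second xnvme_bdevperf suite re-runs the same four workloads as the conserve-CPU variant: in the generated JSON above, the only change from the first pass is "conserve_cpu": true (the -c flag to bdev_xnvme_create), so the two result sets are directly comparable. With the illustrative config file from the earlier sketch, the differing field can be pulled out directly:

    # the single delta between the two bdevperf suites' configs
    jq '.subsystems[].config[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' /tmp/xnvme.json
    # first suite: false; this suite: true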
00:38:25.895 78848.00 IOPS, 308.00 MiB/s [2024-12-09T23:20:07.492Z] 79200.00 IOPS, 309.38 MiB/s [2024-12-09T23:20:08.439Z] 79253.33 IOPS, 309.58 MiB/s [2024-12-09T23:20:09.400Z] 79776.00 IOPS, 311.62 MiB/s 00:38:28.764 Latency(us) 00:38:28.764 [2024-12-09T23:20:09.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.764 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:38:28.764 xnvme_bdev : 5.00 78817.91 307.88 0.00 0.00 808.56 403.30 2772.68 00:38:28.764 [2024-12-09T23:20:09.400Z] =================================================================================================================== 00:38:28.764 [2024-12-09T23:20:09.400Z] Total : 78817.91 307.88 0.00 0.00 808.56 403.30 2772.68 00:38:29.336 23:20:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:29.336 23:20:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:38:29.336 23:20:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:29.336 23:20:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:29.336 23:20:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:29.599 { 00:38:29.599 "subsystems": [ 00:38:29.599 { 00:38:29.599 "subsystem": "bdev", 00:38:29.599 "config": [ 00:38:29.599 { 00:38:29.599 "params": { 00:38:29.599 "io_mechanism": "io_uring_cmd", 00:38:29.599 "conserve_cpu": true, 00:38:29.599 "filename": "/dev/ng0n1", 00:38:29.599 "name": "xnvme_bdev" 00:38:29.599 }, 00:38:29.599 "method": "bdev_xnvme_create" 00:38:29.599 }, 00:38:29.599 { 00:38:29.599 "method": "bdev_wait_for_examine" 00:38:29.599 } 00:38:29.599 ] 00:38:29.599 } 00:38:29.599 ] 00:38:29.599 } 00:38:29.599 [2024-12-09 23:20:10.039475] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:29.599 [2024-12-09 23:20:10.039612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71686 ] 00:38:29.599 [2024-12-09 23:20:10.204784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.864 [2024-12-09 23:20:10.361260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.125 Running I/O for 5 seconds... 
00:38:32.445 40780.00 IOPS, 159.30 MiB/s [2024-12-09T23:20:14.015Z] 37978.00 IOPS, 148.35 MiB/s [2024-12-09T23:20:14.950Z] 33598.33 IOPS, 131.24 MiB/s [2024-12-09T23:20:15.909Z] 31595.50 IOPS, 123.42 MiB/s [2024-12-09T23:20:15.909Z] 30073.00 IOPS, 117.47 MiB/s 00:38:35.273 Latency(us) 00:38:35.273 [2024-12-09T23:20:15.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.273 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:38:35.273 xnvme_bdev : 5.01 30034.24 117.32 0.00 0.00 2124.85 67.74 124215.93 00:38:35.273 [2024-12-09T23:20:15.909Z] =================================================================================================================== 00:38:35.273 [2024-12-09T23:20:15.909Z] Total : 30034.24 117.32 0.00 0.00 2124.85 67.74 124215.93 00:38:35.841 00:38:35.841 real 0m26.038s 00:38:35.841 user 0m17.696s 00:38:35.841 sys 0m6.754s 00:38:35.841 ************************************ 00:38:35.841 END TEST xnvme_bdevperf 00:38:35.841 ************************************ 00:38:35.841 23:20:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.841 23:20:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:36.100 23:20:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:36.100 23:20:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:36.100 23:20:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.100 23:20:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:36.100 ************************************ 00:38:36.100 START TEST xnvme_fio_plugin 00:38:36.100 ************************************ 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:36.100 23:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:36.100 { 00:38:36.100 "subsystems": [ 00:38:36.100 { 00:38:36.100 "subsystem": "bdev", 00:38:36.100 "config": [ 00:38:36.100 { 00:38:36.100 "params": { 00:38:36.100 "io_mechanism": "io_uring_cmd", 00:38:36.100 "conserve_cpu": true, 00:38:36.100 "filename": "/dev/ng0n1", 00:38:36.100 "name": "xnvme_bdev" 00:38:36.100 }, 00:38:36.100 "method": "bdev_xnvme_create" 00:38:36.100 }, 00:38:36.100 { 00:38:36.100 "method": "bdev_wait_for_examine" 00:38:36.100 } 00:38:36.100 ] 00:38:36.100 } 00:38:36.100 ] 00:38:36.100 } 00:38:36.361 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:36.361 fio-3.35 00:38:36.361 Starting 1 thread 00:38:42.947 00:38:42.947 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71799: Mon Dec 9 23:20:22 2024 00:38:42.947 read: IOPS=40.4k, BW=158MiB/s (165MB/s)(789MiB/5002msec) 00:38:42.947 slat (nsec): min=2886, max=79598, avg=3838.16, stdev=2133.32 00:38:42.947 clat (usec): min=688, max=3562, avg=1432.69, stdev=282.06 00:38:42.947 lat (usec): min=691, max=3566, avg=1436.52, stdev=282.60 00:38:42.947 clat percentiles (usec): 00:38:42.947 | 1.00th=[ 914], 5.00th=[ 1045], 10.00th=[ 1106], 20.00th=[ 1205], 00:38:42.947 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1401], 60.00th=[ 1467], 00:38:42.947 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1795], 95.00th=[ 1942], 00:38:42.947 | 99.00th=[ 2278], 99.50th=[ 2442], 99.90th=[ 2835], 99.95th=[ 2966], 00:38:42.947 | 99.99th=[ 3490] 00:38:42.947 bw ( KiB/s): min=148480, max=176128, per=100.00%, avg=163237.44, stdev=8464.06, samples=9 00:38:42.947 iops : min=37120, max=44032, avg=40809.33, stdev=2116.06, samples=9 00:38:42.947 lat (usec) : 750=0.05%, 1000=3.02% 00:38:42.947 lat (msec) : 2=93.21%, 4=3.72% 00:38:42.947 cpu : usr=56.15%, sys=40.55%, ctx=14, majf=0, minf=762 00:38:42.947 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:38:42.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.947 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:38:42.947 issued rwts: total=201856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.947 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:42.947 00:38:42.947 Run status group 0 (all jobs): 00:38:42.947 READ: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=789MiB (827MB), run=5002-5002msec 00:38:42.947 ----------------------------------------------------- 00:38:42.947 Suppressions used: 00:38:42.947 count bytes template 00:38:42.947 1 11 /usr/src/fio/parse.c 00:38:42.947 1 8 libtcmalloc_minimal.so 00:38:42.947 1 904 libcrypto.so 00:38:42.947 ----------------------------------------------------- 00:38:42.947 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:42.947 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:42.948 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:42.948 23:20:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:42.948 { 00:38:42.948 "subsystems": [ 00:38:42.948 { 00:38:42.948 "subsystem": "bdev", 00:38:42.948 "config": [ 00:38:42.948 { 00:38:42.948 "params": { 00:38:42.948 "io_mechanism": "io_uring_cmd", 00:38:42.948 "conserve_cpu": true, 00:38:42.948 "filename": "/dev/ng0n1", 00:38:42.948 "name": "xnvme_bdev" 00:38:42.948 }, 00:38:42.948 "method": "bdev_xnvme_create" 00:38:42.948 }, 00:38:42.948 { 00:38:42.948 "method": "bdev_wait_for_examine" 00:38:42.948 } 00:38:42.948 ] 00:38:42.948 } 00:38:42.948 ] 00:38:42.948 } 00:38:43.206 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:43.206 fio-3.35 00:38:43.206 Starting 1 thread 00:38:49.777 00:38:49.777 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71890: Mon Dec 9 23:20:29 2024 00:38:49.777 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(541MiB/5001msec); 0 zone resets 00:38:49.777 slat (nsec): min=2904, max=93312, avg=3756.82, stdev=2013.20 00:38:49.777 clat (usec): min=55, max=28728, avg=2179.71, stdev=3612.44 00:38:49.777 lat (usec): min=59, max=28731, avg=2183.47, stdev=3612.62 00:38:49.777 clat percentiles (usec): 00:38:49.777 | 1.00th=[ 314], 5.00th=[ 889], 10.00th=[ 1012], 20.00th=[ 1123], 00:38:49.777 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1319], 00:38:49.777 | 70.00th=[ 1385], 80.00th=[ 1483], 90.00th=[ 1729], 95.00th=[12911], 00:38:49.777 | 99.00th=[18744], 99.50th=[20317], 99.90th=[23725], 99.95th=[24773], 00:38:49.777 | 99.99th=[26870] 00:38:49.777 bw ( KiB/s): min=25336, max=188920, per=94.36%, avg=104578.67, stdev=77152.93, samples=9 00:38:49.777 iops : min= 6334, max=47230, avg=26144.67, stdev=19288.23, samples=9 00:38:49.777 lat (usec) : 100=0.14%, 250=0.65%, 500=1.55%, 750=1.47%, 1000=5.54% 00:38:49.777 lat (msec) : 2=83.24%, 4=0.85%, 10=0.39%, 20=5.61%, 50=0.58% 00:38:49.777 cpu : usr=78.36%, sys=17.58%, ctx=21, majf=0, minf=763 00:38:49.777 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.1%, 16=22.2%, 32=52.5%, >=64=4.6% 00:38:49.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.777 complete : 0=0.0%, 4=97.9%, 8=0.6%, 16=0.2%, 32=0.1%, 64=1.3%, >=64=0.0% 00:38:49.777 issued rwts: total=0,138565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.777 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:49.777 00:38:49.777 Run status group 0 (all jobs): 00:38:49.777 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=541MiB (568MB), run=5001-5001msec 00:38:49.777 ----------------------------------------------------- 00:38:49.777 Suppressions used: 00:38:49.777 count bytes template 00:38:49.777 1 11 /usr/src/fio/parse.c 00:38:49.777 1 8 libtcmalloc_minimal.so 00:38:49.777 1 904 libcrypto.so 00:38:49.777 ----------------------------------------------------- 00:38:49.777 00:38:49.777 ************************************ 00:38:49.777 END TEST xnvme_fio_plugin 00:38:49.777 ************************************ 00:38:49.777 00:38:49.777 real 0m13.676s 00:38:49.777 user 0m9.445s 00:38:49.777 sys 0m3.536s 00:38:49.777 23:20:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.777 23:20:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:49.777 23:20:30 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71387 00:38:49.777 23:20:30 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71387 ']' 00:38:49.777 Process with pid 71387 is not found 
00:38:49.777 23:20:30 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71387 00:38:49.777 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71387) - No such process 00:38:49.777 23:20:30 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71387 is not found' 00:38:49.777 00:38:49.777 real 3m31.279s 00:38:49.777 user 2m8.017s 00:38:49.777 sys 1m9.331s 00:38:49.777 23:20:30 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.777 23:20:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.777 ************************************ 00:38:49.777 END TEST nvme_xnvme 00:38:49.777 ************************************ 00:38:49.777 23:20:30 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:38:49.777 23:20:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:49.777 23:20:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.777 23:20:30 -- common/autotest_common.sh@10 -- # set +x 00:38:49.777 ************************************ 00:38:49.777 START TEST blockdev_xnvme 00:38:49.777 ************************************ 00:38:49.777 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:38:49.777 * Looking for test storage... 00:38:50.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:38:50.038 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:50.038 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:50.038 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:38:50.038 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:38:50.038 23:20:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.039 23:20:30 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:50.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.039 --rc genhtml_branch_coverage=1 00:38:50.039 --rc genhtml_function_coverage=1 00:38:50.039 --rc genhtml_legend=1 00:38:50.039 --rc geninfo_all_blocks=1 00:38:50.039 --rc geninfo_unexecuted_blocks=1 00:38:50.039 00:38:50.039 ' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:50.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.039 --rc genhtml_branch_coverage=1 00:38:50.039 --rc genhtml_function_coverage=1 00:38:50.039 --rc genhtml_legend=1 00:38:50.039 --rc geninfo_all_blocks=1 00:38:50.039 --rc geninfo_unexecuted_blocks=1 00:38:50.039 00:38:50.039 ' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:50.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.039 --rc genhtml_branch_coverage=1 00:38:50.039 --rc genhtml_function_coverage=1 00:38:50.039 --rc genhtml_legend=1 00:38:50.039 --rc geninfo_all_blocks=1 00:38:50.039 --rc geninfo_unexecuted_blocks=1 00:38:50.039 00:38:50.039 ' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:50.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.039 --rc genhtml_branch_coverage=1 00:38:50.039 --rc genhtml_function_coverage=1 00:38:50.039 --rc genhtml_legend=1 00:38:50.039 --rc geninfo_all_blocks=1 00:38:50.039 --rc geninfo_unexecuted_blocks=1 00:38:50.039 00:38:50.039 ' 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72024 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72024 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72024 ']' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.039 23:20:30 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.039 23:20:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:50.039 [2024-12-09 23:20:30.600210] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:38:50.039 [2024-12-09 23:20:30.600592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72024 ] 00:38:50.301 [2024-12-09 23:20:30.765264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.301 [2024-12-09 23:20:30.900015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.247 23:20:31 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.247 23:20:31 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:38:51.247 23:20:31 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:38:51.247 23:20:31 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:38:51.247 23:20:31 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:38:51.247 23:20:31 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:38:51.247 23:20:31 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:51.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:52.081 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:52.081 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:52.081 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:52.081 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:52.081 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:38:52.081 23:20:32 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:38:52.081 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:38:52.343 nvme0n1 00:38:52.343 nvme0n2 00:38:52.343 nvme0n3 00:38:52.343 nvme1n1 00:38:52.343 nvme2n1 00:38:52.343 nvme3n1 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 
23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:52.343 23:20:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:38:52.343 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5a3a9197-6722-48fe-9725-2a4684dddac9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5a3a9197-6722-48fe-9725-2a4684dddac9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9c207bd5-50cb-449b-8545-a279f1089e03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c207bd5-50cb-449b-8545-a279f1089e03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4adedc41-81b1-4c54-8c5d-4ceffb33b634"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4adedc41-81b1-4c54-8c5d-4ceffb33b634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "259165c7-77e6-4798-a695-09ee3f69365d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "259165c7-77e6-4798-a695-09ee3f69365d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4f001de0-e2bf-40ab-9609-72f01483e3c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4f001de0-e2bf-40ab-9609-72f01483e3c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b7127b02-02b3-4d07-b645-adf3c056f606"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b7127b02-02b3-4d07-b645-adf3c056f606",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:38:52.344 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:38:52.344 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:38:52.344 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:38:52.344 23:20:32 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72024 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72024 ']' 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72024 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72024 00:38:52.344 killing process with pid 72024 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72024' 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72024 00:38:52.344 23:20:32 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72024 00:38:54.259 23:20:34 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:54.259 23:20:34 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:38:54.259 23:20:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:54.259 23:20:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.259 23:20:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:54.259 ************************************ 00:38:54.259 START TEST bdev_hello_world 00:38:54.259 ************************************ 00:38:54.259 23:20:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:38:54.259 [2024-12-09 23:20:34.751677] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:54.259 [2024-12-09 23:20:34.751832] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72308 ] 00:38:54.521 [2024-12-09 23:20:34.909186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.521 [2024-12-09 23:20:35.037583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.095 [2024-12-09 23:20:35.462753] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:38:55.095 [2024-12-09 23:20:35.463024] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:38:55.095 [2024-12-09 23:20:35.463055] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:38:55.095 [2024-12-09 23:20:35.465270] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:38:55.095 [2024-12-09 23:20:35.466357] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:38:55.095 [2024-12-09 23:20:35.466565] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:38:55.095 [2024-12-09 23:20:35.467076] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
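The bdev_hello_world test running here is a single pass of SPDK's hello_bdev example against the generated JSON config: open the named bdev, write a buffer, read it back, compare. Invoking it by hand looks roughly like the following sketch; the inline config is a trimmed one-bdev illustration rather than the full bdev.json the job uses, and the JSON parameter names are an assumption inferred from the positional "bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c" form in the trace above:

```bash
# Illustrative one-bdev subset of bdev.json (parameter names assumed).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1",
            "io_mechanism": "io_uring",
            "conserve_cpu": true
          }
        }
      ]
    }
  ]
}
EOF

# Write "Hello World!" to nvme0n1 and read it back, as the NOTICE lines show.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /tmp/bdev.json -b nvme0n1
```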
00:38:55.095 00:38:55.095 [2024-12-09 23:20:35.467116] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:38:55.672 ************************************ 00:38:55.672 END TEST bdev_hello_world 00:38:55.672 ************************************ 00:38:55.672 00:38:55.672 real 0m1.587s 00:38:55.672 user 0m1.188s 00:38:55.672 sys 0m0.242s 00:38:55.673 23:20:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:55.673 23:20:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:38:55.939 23:20:36 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:38:55.939 23:20:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:55.939 23:20:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.939 23:20:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:55.939 ************************************ 00:38:55.939 START TEST bdev_bounds 00:38:55.939 ************************************ 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72346 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:55.939 Process bdevio pid: 72346 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72346' 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72346 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72346 ']' 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.939 23:20:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:55.939 [2024-12-09 23:20:36.414120] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
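The bdev_bounds test starting here reuses the same config but drives it through the bdevio CUnit harness in two steps: bdevio is launched with -w so it initializes and then waits, and tests.py perform_tests triggers the registered suites over RPC. A hedged sketch of that handshake, with -w, -s 0 and the paths taken verbatim from the command line above and waitforlisten elided:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# -w: wait for a perform_tests request before running; -s 0 is passed
# through exactly as in the trace above.
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
bdevio_pid=$!

# ... waitforlisten on /var/tmp/spdk.sock, as sketched earlier ...

# Kick off the CUnit suites against every configured bdev, then collect
# the harness's exit status.
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
wait "$bdevio_pid"
```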
00:38:55.939 [2024-12-09 23:20:36.414481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72346 ] 00:38:56.201 [2024-12-09 23:20:36.584940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:56.201 [2024-12-09 23:20:36.721595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.201 [2024-12-09 23:20:36.722403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:56.201 [2024-12-09 23:20:36.722537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.773 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:56.773 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:38:56.773 23:20:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:56.773 I/O targets: 00:38:56.773 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:38:56.773 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:38:56.773 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:38:56.773 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:38:56.773 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:38:56.773 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:38:56.773 00:38:56.773 00:38:56.773 CUnit - A unit testing framework for C - Version 2.1-3 00:38:56.773 http://cunit.sourceforge.net/ 00:38:56.773 00:38:56.773 00:38:56.773 Suite: bdevio tests on: nvme3n1 00:38:56.773 Test: blockdev write read block ...passed 00:38:56.773 Test: blockdev write zeroes read block ...passed 00:38:56.773 Test: blockdev write zeroes read no split ...passed 00:38:57.035 Test: blockdev write zeroes read split ...passed 00:38:57.035 Test: blockdev write zeroes read split partial ...passed 00:38:57.035 Test: blockdev reset ...passed 00:38:57.035 Test: blockdev write read 8 blocks ...passed 00:38:57.035 Test: blockdev write read size > 128k ...passed 00:38:57.035 Test: blockdev write read invalid size ...passed 00:38:57.035 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.035 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.035 Test: blockdev write read max offset ...passed 00:38:57.035 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.035 Test: blockdev writev readv 8 blocks ...passed 00:38:57.035 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.035 Test: blockdev writev readv block ...passed 00:38:57.035 Test: blockdev writev readv size > 128k ...passed 00:38:57.035 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.035 Test: blockdev comparev and writev ...passed 00:38:57.035 Test: blockdev nvme passthru rw ...passed 00:38:57.035 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.035 Test: blockdev nvme admin passthru ...passed 00:38:57.035 Test: blockdev copy ...passed 00:38:57.035 Suite: bdevio tests on: nvme2n1 00:38:57.035 Test: blockdev write read block ...passed 00:38:57.035 Test: blockdev write zeroes read block ...passed 00:38:57.035 Test: blockdev write zeroes read no split ...passed 00:38:57.035 Test: blockdev write zeroes read split ...passed 00:38:57.035 Test: blockdev write zeroes read split partial ...passed 00:38:57.035 Test: blockdev reset ...passed 
00:38:57.035 Test: blockdev write read 8 blocks ...passed 00:38:57.035 Test: blockdev write read size > 128k ...passed 00:38:57.035 Test: blockdev write read invalid size ...passed 00:38:57.035 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.035 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.035 Test: blockdev write read max offset ...passed 00:38:57.035 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.035 Test: blockdev writev readv 8 blocks ...passed 00:38:57.035 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.035 Test: blockdev writev readv block ...passed 00:38:57.035 Test: blockdev writev readv size > 128k ...passed 00:38:57.035 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.035 Test: blockdev comparev and writev ...passed 00:38:57.035 Test: blockdev nvme passthru rw ...passed 00:38:57.035 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.035 Test: blockdev nvme admin passthru ...passed 00:38:57.035 Test: blockdev copy ...passed 00:38:57.035 Suite: bdevio tests on: nvme1n1 00:38:57.035 Test: blockdev write read block ...passed 00:38:57.035 Test: blockdev write zeroes read block ...passed 00:38:57.035 Test: blockdev write zeroes read no split ...passed 00:38:57.035 Test: blockdev write zeroes read split ...passed 00:38:57.035 Test: blockdev write zeroes read split partial ...passed 00:38:57.035 Test: blockdev reset ...passed 00:38:57.035 Test: blockdev write read 8 blocks ...passed 00:38:57.035 Test: blockdev write read size > 128k ...passed 00:38:57.035 Test: blockdev write read invalid size ...passed 00:38:57.035 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.035 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.035 Test: blockdev write read max offset ...passed 00:38:57.036 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.036 Test: blockdev writev readv 8 blocks ...passed 00:38:57.036 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.036 Test: blockdev writev readv block ...passed 00:38:57.036 Test: blockdev writev readv size > 128k ...passed 00:38:57.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.036 Test: blockdev comparev and writev ...passed 00:38:57.036 Test: blockdev nvme passthru rw ...passed 00:38:57.036 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.036 Test: blockdev nvme admin passthru ...passed 00:38:57.036 Test: blockdev copy ...passed 00:38:57.036 Suite: bdevio tests on: nvme0n3 00:38:57.036 Test: blockdev write read block ...passed 00:38:57.036 Test: blockdev write zeroes read block ...passed 00:38:57.036 Test: blockdev write zeroes read no split ...passed 00:38:57.298 Test: blockdev write zeroes read split ...passed 00:38:57.298 Test: blockdev write zeroes read split partial ...passed 00:38:57.298 Test: blockdev reset ...passed 00:38:57.298 Test: blockdev write read 8 blocks ...passed 00:38:57.298 Test: blockdev write read size > 128k ...passed 00:38:57.298 Test: blockdev write read invalid size ...passed 00:38:57.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.298 Test: blockdev write read max offset ...passed 00:38:57.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.298 Test: blockdev writev readv 8 blocks 
...passed 00:38:57.298 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.298 Test: blockdev writev readv block ...passed 00:38:57.298 Test: blockdev writev readv size > 128k ...passed 00:38:57.298 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.298 Test: blockdev comparev and writev ...passed 00:38:57.298 Test: blockdev nvme passthru rw ...passed 00:38:57.298 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.298 Test: blockdev nvme admin passthru ...passed 00:38:57.298 Test: blockdev copy ...passed 00:38:57.298 Suite: bdevio tests on: nvme0n2 00:38:57.298 Test: blockdev write read block ...passed 00:38:57.298 Test: blockdev write zeroes read block ...passed 00:38:57.298 Test: blockdev write zeroes read no split ...passed 00:38:57.298 Test: blockdev write zeroes read split ...passed 00:38:57.298 Test: blockdev write zeroes read split partial ...passed 00:38:57.298 Test: blockdev reset ...passed 00:38:57.298 Test: blockdev write read 8 blocks ...passed 00:38:57.298 Test: blockdev write read size > 128k ...passed 00:38:57.298 Test: blockdev write read invalid size ...passed 00:38:57.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.298 Test: blockdev write read max offset ...passed 00:38:57.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.298 Test: blockdev writev readv 8 blocks ...passed 00:38:57.298 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.298 Test: blockdev writev readv block ...passed 00:38:57.298 Test: blockdev writev readv size > 128k ...passed 00:38:57.298 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.298 Test: blockdev comparev and writev ...passed 00:38:57.298 Test: blockdev nvme passthru rw ...passed 00:38:57.298 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.298 Test: blockdev nvme admin passthru ...passed 00:38:57.298 Test: blockdev copy ...passed 00:38:57.298 Suite: bdevio tests on: nvme0n1 00:38:57.298 Test: blockdev write read block ...passed 00:38:57.298 Test: blockdev write zeroes read block ...passed 00:38:57.298 Test: blockdev write zeroes read no split ...passed 00:38:57.298 Test: blockdev write zeroes read split ...passed 00:38:57.298 Test: blockdev write zeroes read split partial ...passed 00:38:57.298 Test: blockdev reset ...passed 00:38:57.298 Test: blockdev write read 8 blocks ...passed 00:38:57.298 Test: blockdev write read size > 128k ...passed 00:38:57.298 Test: blockdev write read invalid size ...passed 00:38:57.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:57.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:57.298 Test: blockdev write read max offset ...passed 00:38:57.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:57.298 Test: blockdev writev readv 8 blocks ...passed 00:38:57.298 Test: blockdev writev readv 30 x 1block ...passed 00:38:57.298 Test: blockdev writev readv block ...passed 00:38:57.298 Test: blockdev writev readv size > 128k ...passed 00:38:57.298 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:57.298 Test: blockdev comparev and writev ...passed 00:38:57.298 Test: blockdev nvme passthru rw ...passed 00:38:57.298 Test: blockdev nvme passthru vendor specific ...passed 00:38:57.298 Test: blockdev nvme admin passthru ...passed 00:38:57.298 Test: blockdev copy ...passed 
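As a quick cross-check of the I/O targets header above: each MiB figure is just num_blocks x block_size from the earlier bdev_get_bdevs dump, converted at 1 MiB = 1048576 bytes. For example:

```bash
# num_blocks * block_size in MiB; values copied from the bdev_get_bdevs output.
echo $(( 1048576 * 4096 / 1048576 ))   # nvme0n1/n2/n3 -> 4096 MiB
echo $((  262144 * 4096 / 1048576 ))   # nvme1n1       -> 1024 MiB
echo $(( 1548666 * 4096 / 1048576 ))   # nvme2n1       -> 6049 MiB (printed as 6050, i.e. rounded up)
echo $(( 1310720 * 4096 / 1048576 ))   # nvme3n1       -> 5120 MiB
```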
00:38:57.298 00:38:57.298 Run Summary: Type Total Ran Passed Failed Inactive 00:38:57.298 suites 6 6 n/a 0 0 00:38:57.298 tests 138 138 138 0 0 00:38:57.298 asserts 780 780 780 0 n/a 00:38:57.298 00:38:57.298 Elapsed time = 1.262 seconds 00:38:57.298 0 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72346 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72346 ']' 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72346 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72346 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72346' 00:38:57.298 killing process with pid 72346 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72346 00:38:57.298 23:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72346 00:38:58.244 23:20:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:38:58.244 00:38:58.244 real 0m2.406s 00:38:58.244 user 0m5.769s 00:38:58.244 sys 0m0.399s 00:38:58.244 23:20:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.244 ************************************ 00:38:58.244 END TEST bdev_bounds 00:38:58.244 ************************************ 00:38:58.244 23:20:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:58.244 23:20:38 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:38:58.244 23:20:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:58.244 23:20:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.244 23:20:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:58.244 ************************************ 00:38:58.244 START TEST bdev_nbd 00:38:58.244 ************************************ 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
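The bdev_nbd test beginning here exports each of the six bdevs as a kernel /dev/nbdX device through the dedicated /var/tmp/spdk-nbd.sock RPC server and then exercises them from the host side. The per-bdev mapping step traced below reduces to something like this simplified version of nbd_start_disks_without_nbd_idx (device paths are whatever nbd_start_disk returns):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)

for bdev in "${bdev_list[@]}"; do
    # Without an explicit /dev/nbdX argument the target picks the next free
    # nbd device and prints its path.
    nbd_device=$("$RPC" -s "$SOCK" nbd_start_disk "$bdev")
    echo "$bdev exported as $nbd_device"
done

# The resulting bdev <-> nbd mapping is reported as JSON by:
"$RPC" -s "$SOCK" nbd_get_disks
```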
00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72405 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72405 /var/tmp/spdk-nbd.sock 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72405 ']' 00:38:58.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:58.244 23:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:58.505 [2024-12-09 23:20:38.903171] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
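Each mapped device is then verified by the waitfornbd helper traced below: it polls /proc/partitions until the kernel publishes the nbd name, then proves the device services I/O with a one-block O_DIRECT dd into a scratch file whose size must be non-zero. A condensed sketch, with the (( i <= 20 )) retry loops mirroring the trace (the 0.1 s sleep is an assumption):

```bash
waitfornbd() {
    local nbd_name=$1 i

    # Wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Read one 4k block with O_DIRECT and confirm data actually landed.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
           [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitfornbd nbd0
```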
00:38:58.506 [2024-12-09 23:20:38.903317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:58.506 [2024-12-09 23:20:39.069264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.767 [2024-12-09 23:20:39.203952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:38:59.339 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:59.600 23:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:59.600 
1+0 records in 00:38:59.600 1+0 records out 00:38:59.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084199 s, 4.9 MB/s 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:38:59.600 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:59.862 1+0 records in 00:38:59.862 1+0 records out 00:38:59.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000956586 s, 4.3 MB/s 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:38:59.862 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:39:00.122 23:20:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:00.122 1+0 records in 00:39:00.122 1+0 records out 00:39:00.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128192 s, 3.2 MB/s 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.122 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:39:00.123 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:39:00.385 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:39:00.385 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:39:00.385 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:39:00.385 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:00.385 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:00.386 1+0 records in 00:39:00.386 1+0 records out 00:39:00.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137491 s, 3.0 MB/s 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:00.386 23:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:39:00.386 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:00.655 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:00.656 1+0 records in 00:39:00.656 1+0 records out 00:39:00.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743866 s, 5.5 MB/s 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:39:00.656 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:39:00.939 23:20:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:00.939 1+0 records in 00:39:00.939 1+0 records out 00:39:00.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124861 s, 3.3 MB/s 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd0", 00:39:00.939 "bdev_name": "nvme0n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd1", 00:39:00.939 "bdev_name": "nvme0n2" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd2", 00:39:00.939 "bdev_name": "nvme0n3" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd3", 00:39:00.939 "bdev_name": "nvme1n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd4", 00:39:00.939 "bdev_name": "nvme2n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd5", 00:39:00.939 "bdev_name": "nvme3n1" 00:39:00.939 } 00:39:00.939 ]' 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd0", 00:39:00.939 "bdev_name": "nvme0n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd1", 00:39:00.939 "bdev_name": "nvme0n2" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd2", 00:39:00.939 "bdev_name": "nvme0n3" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd3", 00:39:00.939 "bdev_name": "nvme1n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd4", 00:39:00.939 "bdev_name": "nvme2n1" 00:39:00.939 }, 00:39:00.939 { 00:39:00.939 "nbd_device": "/dev/nbd5", 00:39:00.939 "bdev_name": "nvme3n1" 00:39:00.939 } 00:39:00.939 ]' 00:39:00.939 23:20:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:00.939 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:01.200 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:01.201 23:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:01.462 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:01.724 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:01.987 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:02.249 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:02.510 23:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
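[annotation] Teardown mirrors setup: each nbd_stop_disk RPC above is followed by waitfornbd_exit, which polls /proc/partitions until the device name disappears. Roughly, under the same assumptions as the earlier sketch:

    # Sketch of waitfornbd_exit (nbd_common.sh@35-45): wait for the kernel
    # to retract the device after the RPC tears it down.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                  # assumed delay between polls
        done
        return 0
    }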
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:02.510 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:02.770 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:39:02.770 /dev/nbd0 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- 
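[annotation] The zero-count assertion above distills to: ask the target which NBDs it still exports, count names matching /dev/nbd, and fail if any survive. A sketch of that nbd_get_count path, inferred from the trace (run from the SPDK repo root; the bare `true` in the log suggests the grep exit status is tolerated as below):

    # nbd_get_count, reconstructed from the trace (nbd_common.sh@61-66).
    nbd_get_count() {
        local rpc_server=$1 names
        names=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero when nothing matches.
        echo "$names" | grep -c /dev/nbd || true
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne 0 ]; then exit 1; fi   # leftover /dev/nbd* is a failure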
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.029 1+0 records in 00:39:03.029 1+0 records out 00:39:03.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708544 s, 5.8 MB/s 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:03.029 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:39:03.029 /dev/nbd1 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.030 1+0 records in 00:39:03.030 1+0 records out 00:39:03.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768395 s, 5.3 MB/s 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:03.030 23:20:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:03.030 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:39:03.291 /dev/nbd10 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.291 1+0 records in 00:39:03.291 1+0 records out 00:39:03.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103098 s, 4.0 MB/s 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:03.291 23:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:39:03.552 /dev/nbd11 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.552 23:20:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.552 1+0 records in 00:39:03.552 1+0 records out 00:39:03.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119321 s, 3.4 MB/s 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:03.552 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:39:03.813 /dev/nbd12 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.813 1+0 records in 00:39:03.813 1+0 records out 00:39:03.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125945 s, 3.3 MB/s 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:03.813 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:39:04.075 /dev/nbd13 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:04.075 1+0 records in 00:39:04.075 1+0 records out 00:39:04.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010919 s, 3.8 MB/s 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:04.075 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:04.336 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:04.336 { 00:39:04.336 "nbd_device": "/dev/nbd0", 00:39:04.337 "bdev_name": "nvme0n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd1", 00:39:04.337 "bdev_name": "nvme0n2" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd10", 00:39:04.337 "bdev_name": "nvme0n3" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd11", 00:39:04.337 "bdev_name": "nvme1n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd12", 00:39:04.337 "bdev_name": "nvme2n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd13", 00:39:04.337 "bdev_name": "nvme3n1" 00:39:04.337 } 00:39:04.337 ]' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd 
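[annotation] The second pass above walks two parallel lists, six bdevs onto six NBD nodes, issuing one nbd_start_disk RPC per pair and gating on the readiness check from earlier. A condensed sketch of that nbd_common.sh loop, reusing the waitfornbd sketch defined above:

    # nbd_start_disks (nbd_common.sh@9-17), condensed; argument handling
    # simplified to the space-separated strings the traced script passes.
    nbd_start_disks() {
        local rpc_server=$1
        local -a bdev_list=($2) nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            scripts/rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }
    nbd_start_disks /var/tmp/spdk-nbd.sock \
        'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' \
        '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'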
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd0", 00:39:04.337 "bdev_name": "nvme0n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd1", 00:39:04.337 "bdev_name": "nvme0n2" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd10", 00:39:04.337 "bdev_name": "nvme0n3" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd11", 00:39:04.337 "bdev_name": "nvme1n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd12", 00:39:04.337 "bdev_name": "nvme2n1" 00:39:04.337 }, 00:39:04.337 { 00:39:04.337 "nbd_device": "/dev/nbd13", 00:39:04.337 "bdev_name": "nvme3n1" 00:39:04.337 } 00:39:04.337 ]' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:04.337 /dev/nbd1 00:39:04.337 /dev/nbd10 00:39:04.337 /dev/nbd11 00:39:04.337 /dev/nbd12 00:39:04.337 /dev/nbd13' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:04.337 /dev/nbd1 00:39:04.337 /dev/nbd10 00:39:04.337 /dev/nbd11 00:39:04.337 /dev/nbd12 00:39:04.337 /dev/nbd13' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:04.337 256+0 records in 00:39:04.337 256+0 records out 00:39:04.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00832645 s, 126 MB/s 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:04.337 23:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:04.598 256+0 records in 00:39:04.598 256+0 records out 00:39:04.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254556 s, 4.1 MB/s 00:39:04.598 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:04.598 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:04.860 256+0 records in 00:39:04.860 256+0 records out 00:39:04.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209168 s, 
5.0 MB/s 00:39:04.860 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:04.860 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:39:05.129 256+0 records in 00:39:05.129 256+0 records out 00:39:05.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195601 s, 5.4 MB/s 00:39:05.129 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:05.129 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:39:05.396 256+0 records in 00:39:05.396 256+0 records out 00:39:05.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234849 s, 4.5 MB/s 00:39:05.396 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:05.396 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:39:05.396 256+0 records in 00:39:05.396 256+0 records out 00:39:05.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206075 s, 5.1 MB/s 00:39:05.396 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:05.396 23:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:39:05.658 256+0 records in 00:39:05.658 256+0 records out 00:39:05.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235776 s, 4.4 MB/s 00:39:05.658 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:39:05.659 
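[annotation] The data-path check running above has two passes: write a 1 MiB random pattern (256 x 4 KiB blocks) through every device with O_DIRECT, then cmp the first 1 MiB of each device byte-for-byte against the source file. Sketched, with a simplified scratch path (the traced run uses test/bdev/nbdrandtest):

    # nbd_dd_data_verify (nbd_common.sh@70-85), sketched from the commands
    # in the trace; error handling omitted.
    nbd_dd_data_verify() {
        local -a nbd_list=($1)
        local operation=$2 tmp_file=/tmp/nbdrandtest i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # -b prints differing bytes
            done
            rm "$tmp_file"
        fi
    }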
23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:05.659 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:39:05.920 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:05.921 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.182 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.444 23:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.706 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.968 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:07.228 
23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:07.228 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:39:07.487 23:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:07.487 malloc_lvol_verify 00:39:07.487 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:07.746 96dde1fc-1e67-4a3a-b50e-bdc95b57a92d 00:39:07.746 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:08.006 79a779cb-2db3-42bf-81e2-74d872991a14 00:39:08.006 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:08.266 /dev/nbd0 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:39:08.266 mke2fs 1.47.0 (5-Feb-2023) 00:39:08.266 Discarding device blocks: 0/4096 
done 00:39:08.266 Creating filesystem with 4096 1k blocks and 1024 inodes 00:39:08.266 00:39:08.266 Allocating group tables: 0/1 done 00:39:08.266 Writing inode tables: 0/1 done 00:39:08.266 Creating journal (1024 blocks): done 00:39:08.266 Writing superblocks and filesystem accounting information: 0/1 done 00:39:08.266 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:08.266 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72405 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72405 ']' 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72405 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72405 00:39:08.527 killing process with pid 72405 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72405' 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72405 00:39:08.527 23:20:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72405 00:39:09.473 23:20:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:39:09.473 00:39:09.473 real 0m11.055s 00:39:09.473 user 0m14.658s 00:39:09.473 sys 0m3.877s 00:39:09.473 23:20:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.473 23:20:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:09.473 ************************************ 00:39:09.473 END TEST bdev_nbd 00:39:09.473 ************************************ 
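[annotation] The last functional step before teardown, nbd_with_lvol_verify, builds a tiny logical-volume stack and lets mkfs.ext4 prove the exported capacity is real: a 16 MiB malloc bdev hosts an lvstore, a 4 MiB lvol carved from it is exported as /dev/nbd0, and the filesystem create only succeeds if the kernel sees the advertised size. The equivalent standalone RPC sequence, roughly (sizes taken from the trace):

    # Standalone replay of nbd_with_lvol_verify (nbd_common.sh@131-142);
    # 16 MiB malloc bdev with 512 B blocks, 4 MiB lvol, per the log above.
    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    scripts/rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    scripts/rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs
    scripts/rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0               # fails if capacity never propagated
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0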
00:39:09.473 23:20:49 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:39:09.473 23:20:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:39:09.473 23:20:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:39:09.473 23:20:49 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:39:09.473 23:20:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:09.473 23:20:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.474 23:20:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:09.474 ************************************ 00:39:09.474 START TEST bdev_fio 00:39:09.474 ************************************ 00:39:09.474 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:39:09.474 23:20:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- 
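[annotation] fio_config_gen here seeds bdev.fio from a verify template (not shown in this log) and, since fio reports 3.35, appends serialize_overlap=1; the loop that follows then appends one [job_*] section per bdev. A sketch of that per-bdev step, assuming the echoes land in the generated file:

    # Per-bdev job sections (blockdev.sh@340-342), sketched; the append
    # target is an assumption based on the echoed lines below.
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        echo "[job_${b}]"    >> bdev.fio
        echo "filename=${b}" >> bdev.fio
    done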
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:09.474 ************************************ 00:39:09.474 START TEST bdev_fio_rw_verify 00:39:09.474 ************************************ 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:09.474 23:20:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:09.737 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:09.737 fio-3.35 00:39:09.737 Starting 6 threads 00:39:21.989 00:39:21.989 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72812: Mon Dec 9 23:21:00 2024 00:39:21.989 read: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(614MiB/10002msec) 00:39:21.989 slat (usec): min=2, max=2069, avg= 7.02, stdev=19.08 00:39:21.989 clat (usec): min=69, max=8123, avg=1191.48, stdev=751.39 00:39:21.989 lat (usec): min=74, max=8139, avg=1198.49, stdev=752.28 
00:39:21.989 clat percentiles (usec): 00:39:21.989 | 50.000th=[ 1090], 99.000th=[ 3523], 99.900th=[ 4752], 99.990th=[ 7832], 00:39:21.989 | 99.999th=[ 8094] 00:39:21.989 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(621MiB/10002msec); 0 zone resets 00:39:21.989 slat (usec): min=11, max=5221, avg=44.50, stdev=148.07 00:39:21.989 clat (usec): min=73, max=8248, avg=1509.71, stdev=833.01 00:39:21.989 lat (usec): min=89, max=8280, avg=1554.21, stdev=846.82 00:39:21.989 clat percentiles (usec): 00:39:21.989 | 50.000th=[ 1385], 99.000th=[ 4047], 99.900th=[ 5800], 99.990th=[ 7504], 00:39:21.989 | 99.999th=[ 8225] 00:39:21.989 bw ( KiB/s): min=48979, max=101048, per=100.00%, avg=63798.68, stdev=2516.33, samples=114 00:39:21.989 iops : min=12242, max=25262, avg=15949.00, stdev=629.12, samples=114 00:39:21.989 lat (usec) : 100=0.01%, 250=3.97%, 500=9.20%, 750=11.56%, 1000=12.86% 00:39:21.989 lat (msec) : 2=44.02%, 4=17.61%, 10=0.76% 00:39:21.989 cpu : usr=39.78%, sys=34.24%, ctx=5652, majf=0, minf=15476 00:39:21.989 IO depths : 1=10.7%, 2=23.1%, 4=51.6%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:21.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.989 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.989 issued rwts: total=157307,158946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:21.989 00:39:21.989 Run status group 0 (all jobs): 00:39:21.989 READ: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=614MiB (644MB), run=10002-10002msec 00:39:21.989 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=621MiB (651MB), run=10002-10002msec 00:39:21.989 ----------------------------------------------------- 00:39:21.990 Suppressions used: 00:39:21.990 count bytes template 00:39:21.990 6 48 /usr/src/fio/parse.c 00:39:21.990 1540 147840 /usr/src/fio/iolog.c 00:39:21.990 1 8 libtcmalloc_minimal.so 00:39:21.990 1 904 libcrypto.so 00:39:21.990 ----------------------------------------------------- 00:39:21.990 00:39:21.990 00:39:21.990 real 0m11.877s 00:39:21.990 user 0m25.282s 00:39:21.990 sys 0m20.879s 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.990 ************************************ 00:39:21.990 END TEST bdev_fio_rw_verify 00:39:21.990 ************************************ 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- 
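[annotation] After the read/write pass, the suite rebuilds bdev.fio for a trim workload, but only for bdevs that can actually service unmap; the selection runs the per-bdev JSON that follows through a jq filter. In the dump below every xNVMe bdev reports "unmap": false, so the name list comes back empty, no trim job is written, and the fio file is simply removed. A hypothetical standalone version of the query against a live target (note: bdev_get_bdevs returns an array, hence the leading .[]; the traced script streams one object per bdev and omits it):

    # Which bdevs support unmap/trim? Sketch only; socket path assumed.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'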
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:21.990 23:21:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5a3a9197-6722-48fe-9725-2a4684dddac9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5a3a9197-6722-48fe-9725-2a4684dddac9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9c207bd5-50cb-449b-8545-a279f1089e03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c207bd5-50cb-449b-8545-a279f1089e03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4adedc41-81b1-4c54-8c5d-4ceffb33b634"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4adedc41-81b1-4c54-8c5d-4ceffb33b634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "259165c7-77e6-4798-a695-09ee3f69365d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "259165c7-77e6-4798-a695-09ee3f69365d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4f001de0-e2bf-40ab-9609-72f01483e3c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4f001de0-e2bf-40ab-9609-72f01483e3c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b7127b02-02b3-4d07-b645-adf3c056f606"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b7127b02-02b3-4d07-b645-adf3c056f606",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:39:21.990 /home/vagrant/spdk_repo/spdk 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:39:21.990 00:39:21.990 real 0m12.063s 00:39:21.990 user 
0m25.363s 00:39:21.990 sys 0m20.961s 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.990 ************************************ 00:39:21.990 END TEST bdev_fio 00:39:21.990 23:21:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:21.991 ************************************ 00:39:21.991 23:21:02 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:21.991 23:21:02 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:21.991 23:21:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:39:21.991 23:21:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.991 23:21:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:21.991 ************************************ 00:39:21.991 START TEST bdev_verify 00:39:21.991 ************************************ 00:39:21.991 23:21:02 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:21.991 [2024-12-09 23:21:02.145326] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:21.991 [2024-12-09 23:21:02.145471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72985 ] 00:39:21.991 [2024-12-09 23:21:02.309317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:21.991 [2024-12-09 23:21:02.432706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.991 [2024-12-09 23:21:02.432819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.564 Running I/O for 5 seconds... 
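The verify pass that starts here is driven entirely by the bdevperf example app named in the run_test line above. A minimal sketch of an equivalent manual invocation, assuming the same repo layout and the bdev.json generated earlier in this run (only the SPDK variable is introduced for brevity):

    # 5-second verify workload: queue depth 128, 4 KiB I/Os (-o 4096),
    # cores 0-1 (-m 0x3). Judging by the paired per-core result rows below,
    # -C fans every bdev out to each core in the mask.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3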
00:39:24.457 24546.00 IOPS, 95.88 MiB/s [2024-12-09T23:21:06.481Z] 23600.00 IOPS, 92.19 MiB/s [2024-12-09T23:21:07.475Z] 23750.33 IOPS, 92.77 MiB/s [2024-12-09T23:21:08.069Z] 23094.00 IOPS, 90.21 MiB/s [2024-12-09T23:21:08.331Z] 23047.20 IOPS, 90.03 MiB/s 00:39:27.695 Latency(us) 00:39:27.695 [2024-12-09T23:21:08.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.695 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0x80000 00:39:27.695 nvme0n1 : 5.07 1768.87 6.91 0.00 0.00 72221.05 10687.41 67350.84 00:39:27.695 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x80000 length 0x80000 00:39:27.695 nvme0n1 : 5.06 1819.87 7.11 0.00 0.00 70222.61 10889.06 74610.22 00:39:27.695 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0x80000 00:39:27.695 nvme0n2 : 5.07 1766.91 6.90 0.00 0.00 72151.26 9880.81 63317.86 00:39:27.695 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x80000 length 0x80000 00:39:27.695 nvme0n2 : 5.07 1819.16 7.11 0.00 0.00 70133.95 8116.38 70173.93 00:39:27.695 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0x80000 00:39:27.695 nvme0n3 : 5.08 1765.36 6.90 0.00 0.00 72060.60 12401.43 64931.05 00:39:27.695 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x80000 length 0x80000 00:39:27.695 nvme0n3 : 5.07 1817.95 7.10 0.00 0.00 70072.28 13913.80 60898.07 00:39:27.695 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0x20000 00:39:27.695 nvme1n1 : 5.09 1760.49 6.88 0.00 0.00 72109.67 10032.05 73803.62 00:39:27.695 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x20000 length 0x20000 00:39:27.695 nvme1n1 : 5.07 1817.23 7.10 0.00 0.00 69989.98 10889.06 67350.84 00:39:27.695 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0xbd0bd 00:39:27.695 nvme2n1 : 5.10 2334.70 9.12 0.00 0.00 54141.53 6377.16 65737.65 00:39:27.695 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:39:27.695 nvme2n1 : 5.09 2419.86 9.45 0.00 0.00 52408.10 4234.63 70577.23 00:39:27.695 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0x0 length 0xa0000 00:39:27.695 nvme3n1 : 5.10 1806.48 7.06 0.00 0.00 70053.20 5671.38 71383.83 00:39:27.695 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.695 Verification LBA range: start 0xa0000 length 0xa0000 00:39:27.695 nvme3n1 : 5.06 1871.63 7.31 0.00 0.00 67739.47 5570.56 72593.72 00:39:27.695 [2024-12-09T23:21:08.331Z] =================================================================================================================== 00:39:27.695 [2024-12-09T23:21:08.331Z] Total : 22768.52 88.94 0.00 0.00 67006.54 4234.63 74610.22 00:39:28.641 00:39:28.641 real 0m6.912s 00:39:28.641 user 0m11.076s 00:39:28.641 sys 0m1.563s 00:39:28.641 23:21:08 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.641 ************************************ 00:39:28.641 END TEST bdev_verify 00:39:28.641 ************************************ 00:39:28.641 23:21:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:28.641 23:21:09 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:28.641 23:21:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:39:28.641 23:21:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.641 23:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:28.641 ************************************ 00:39:28.641 START TEST bdev_verify_big_io 00:39:28.641 ************************************ 00:39:28.641 23:21:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:28.641 [2024-12-09 23:21:09.138799] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:28.641 [2024-12-09 23:21:09.138940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73099 ] 00:39:28.910 [2024-12-09 23:21:09.307879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:28.910 [2024-12-09 23:21:09.454591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.910 [2024-12-09 23:21:09.454683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.493 Running I/O for 5 seconds... 
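In the big-I/O results that follow, reported bandwidth is simply IOPS times the 64 KiB block size set with -o 65536, so the interim samples can be sanity-checked directly:

    # One of the interim samples below: 2696 IOPS at 64 KiB per I/O.
    awk 'BEGIN { printf "%.2f MiB/s\n", 2696 * 65536 / (1024 * 1024) }'
    # prints 168.50 MiB/s, matching the "2696.00 IOPS, 168.50 MiB/s" sample.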
00:39:35.341 1584.00 IOPS, 99.00 MiB/s [2024-12-09T23:21:15.977Z] 2696.00 IOPS, 168.50 MiB/s [2024-12-09T23:21:16.547Z] 3167.00 IOPS, 197.94 MiB/s 00:39:35.911 Latency(us) 00:39:35.911 [2024-12-09T23:21:16.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.911 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0x8000 00:39:35.911 nvme0n1 : 5.82 118.31 7.39 0.00 0.00 1040806.06 86305.87 1419610.58 00:39:35.911 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x8000 length 0x8000 00:39:35.911 nvme0n1 : 5.83 131.00 8.19 0.00 0.00 944934.89 8973.39 1593835.52 00:39:35.911 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0x8000 00:39:35.911 nvme0n2 : 5.83 148.21 9.26 0.00 0.00 818350.63 10737.82 832408.02 00:39:35.911 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x8000 length 0x8000 00:39:35.911 nvme0n2 : 5.90 130.22 8.14 0.00 0.00 915401.12 64124.46 1729343.80 00:39:35.911 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0x8000 00:39:35.911 nvme0n3 : 5.89 115.86 7.24 0.00 0.00 1018651.49 64931.05 1755154.90 00:39:35.911 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x8000 length 0x8000 00:39:35.911 nvme0n3 : 5.77 124.70 7.79 0.00 0.00 942045.27 127442.31 690446.97 00:39:35.911 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0x2000 00:39:35.911 nvme1n1 : 6.41 117.84 7.36 0.00 0.00 939318.60 778.24 1426063.36 00:39:35.911 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x2000 length 0x2000 00:39:35.911 nvme1n1 : 6.41 147.34 9.21 0.00 0.00 737585.00 639.61 871124.68 00:39:35.911 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0xbd0b 00:39:35.911 nvme2n1 : 5.83 159.08 9.94 0.00 0.00 699717.13 8721.33 1535760.54 00:39:35.911 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0xbd0b length 0xbd0b 00:39:35.911 nvme2n1 : 5.91 146.26 9.14 0.00 0.00 762858.50 10687.41 1755154.90 00:39:35.911 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0x0 length 0xa000 00:39:35.911 nvme3n1 : 5.90 162.75 10.17 0.00 0.00 662631.87 4108.60 1232480.10 00:39:35.911 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:35.911 Verification LBA range: start 0xa000 length 0xa000 00:39:35.911 nvme3n1 : 5.91 150.14 9.38 0.00 0.00 719310.35 4587.52 1187310.67 00:39:35.911 [2024-12-09T23:21:16.547Z] =================================================================================================================== 00:39:35.911 [2024-12-09T23:21:16.547Z] Total : 1651.71 103.23 0.00 0.00 835609.97 639.61 1755154.90 00:39:37.297 ************************************ 00:39:37.297 END TEST bdev_verify_big_io 00:39:37.297 ************************************ 00:39:37.297 00:39:37.297 real 0m8.523s 00:39:37.297 user 0m15.401s 00:39:37.297 sys 0m0.606s 00:39:37.297 
23:21:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.297 23:21:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.297 23:21:17 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:37.297 23:21:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:39:37.297 23:21:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.297 23:21:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.297 ************************************ 00:39:37.297 START TEST bdev_write_zeroes 00:39:37.297 ************************************ 00:39:37.298 23:21:17 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:37.298 [2024-12-09 23:21:17.739500] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:37.298 [2024-12-09 23:21:17.739646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:39:37.298 [2024-12-09 23:21:17.901637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.559 [2024-12-09 23:21:18.045571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.132 Running I/O for 1 seconds... 00:39:39.079 71360.00 IOPS, 278.75 MiB/s 00:39:39.079 Latency(us) 00:39:39.079 [2024-12-09T23:21:19.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:39.079 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme0n1 : 1.02 11703.56 45.72 0.00 0.00 10924.73 8267.62 18652.55 00:39:39.079 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme0n2 : 1.02 11689.27 45.66 0.00 0.00 10928.37 8267.62 18854.20 00:39:39.079 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme0n3 : 1.02 11675.53 45.61 0.00 0.00 10928.91 8267.62 19257.50 00:39:39.079 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme1n1 : 1.02 11662.34 45.56 0.00 0.00 10931.11 7965.14 19559.98 00:39:39.079 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme2n1 : 1.03 12270.76 47.93 0.00 0.00 10373.23 4285.05 16837.71 00:39:39.079 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:39.079 nvme3n1 : 1.02 11772.71 45.99 0.00 0.00 10801.84 6503.19 19559.98 00:39:39.079 [2024-12-09T23:21:19.715Z] =================================================================================================================== 00:39:39.079 [2024-12-09T23:21:19.715Z] Total : 70774.18 276.46 0.00 0.00 10810.37 4285.05 19559.98 00:39:40.023 00:39:40.023 real 0m2.723s 00:39:40.023 user 0m1.982s 00:39:40.023 sys 0m0.542s 00:39:40.023 23:21:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:40.023 23:21:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:39:40.023 
************************************ 00:39:40.023 END TEST bdev_write_zeroes 00:39:40.023 ************************************ 00:39:40.023 23:21:20 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.023 23:21:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:39:40.023 23:21:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:40.023 23:21:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:40.023 ************************************ 00:39:40.023 START TEST bdev_json_nonenclosed 00:39:40.023 ************************************ 00:39:40.023 23:21:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.023 [2024-12-09 23:21:20.508568] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:40.023 [2024-12-09 23:21:20.508689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73271 ] 00:39:40.284 [2024-12-09 23:21:20.668652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.284 [2024-12-09 23:21:20.773134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.284 [2024-12-09 23:21:20.773218] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:39:40.284 [2024-12-09 23:21:20.773236] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:40.284 [2024-12-09 23:21:20.773246] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:40.544 ************************************ 00:39:40.544 END TEST bdev_json_nonenclosed 00:39:40.544 ************************************ 00:39:40.544 00:39:40.544 real 0m0.508s 00:39:40.544 user 0m0.318s 00:39:40.544 sys 0m0.085s 00:39:40.545 23:21:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:40.545 23:21:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:40.545 23:21:20 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.545 23:21:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:39:40.545 23:21:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:40.545 23:21:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:40.545 ************************************ 00:39:40.545 START TEST bdev_json_nonarray 00:39:40.545 ************************************ 00:39:40.545 23:21:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.545 [2024-12-09 23:21:21.075219] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
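The nonenclosed test above and the nonarray test starting here both feed bdevperf a deliberately malformed config and expect spdk_app_start to fail. A hypothetical reconstruction of the two fixtures (the actual contents of the repo's nonenclosed.json and nonarray.json may differ); each breaks exactly one rule of a valid config, which is an object enclosing a "subsystems" array:

    # Not enclosed in {} -> json_config_prepare_ctx rejects it (*ERROR* above).
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # Enclosed, but "subsystems" is not an array -> rejected (*ERROR* below).
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF

Both rejection messages are visible in the surrounding json_config.c error lines, followed in each case by spdk_app_stop'd on non-zero.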
00:39:40.545 [2024-12-09 23:21:21.075341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73291 ] 00:39:40.806 [2024-12-09 23:21:21.233598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.806 [2024-12-09 23:21:21.338571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.806 [2024-12-09 23:21:21.338658] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:39:40.806 [2024-12-09 23:21:21.338677] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:40.806 [2024-12-09 23:21:21.338687] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:41.069 ************************************ 00:39:41.069 END TEST bdev_json_nonarray 00:39:41.069 ************************************ 00:39:41.069 00:39:41.069 real 0m0.510s 00:39:41.069 user 0m0.316s 00:39:41.069 sys 0m0.090s 00:39:41.069 23:21:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.069 23:21:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:39:41.069 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:39:41.070 23:21:21 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:41.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:48.237 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:39:48.237 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:39:48.237 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:48.237 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:39:48.237 00:39:48.237 real 0m58.307s 00:39:48.237 user 1m21.181s 00:39:48.237 sys 0m40.413s 00:39:48.237 23:21:28 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.237 23:21:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:48.237 ************************************ 00:39:48.237 END TEST blockdev_xnvme 00:39:48.237 ************************************ 00:39:48.237 23:21:28 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:39:48.237 23:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:48.237 23:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.237 23:21:28 -- 
common/autotest_common.sh@10 -- # set +x 00:39:48.237 ************************************ 00:39:48.237 START TEST ublk 00:39:48.237 ************************************ 00:39:48.237 23:21:28 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:39:48.237 * Looking for test storage... 00:39:48.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:39:48.237 23:21:28 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.237 23:21:28 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.237 23:21:28 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.237 23:21:28 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.237 23:21:28 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.237 23:21:28 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.237 23:21:28 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.237 23:21:28 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.237 23:21:28 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.237 23:21:28 ublk -- scripts/common.sh@344 -- # case "$op" in 00:39:48.237 23:21:28 ublk -- scripts/common.sh@345 -- # : 1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.237 23:21:28 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.237 23:21:28 ublk -- scripts/common.sh@365 -- # decimal 1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@353 -- # local d=1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.237 23:21:28 ublk -- scripts/common.sh@355 -- # echo 1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.237 23:21:28 ublk -- scripts/common.sh@366 -- # decimal 2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@353 -- # local d=2 00:39:48.237 23:21:28 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.237 23:21:28 ublk -- scripts/common.sh@355 -- # echo 2 00:39:48.498 23:21:28 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.498 23:21:28 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.498 23:21:28 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.498 23:21:28 ublk -- scripts/common.sh@368 -- # return 0 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.498 --rc genhtml_branch_coverage=1 00:39:48.498 --rc genhtml_function_coverage=1 00:39:48.498 --rc genhtml_legend=1 00:39:48.498 --rc geninfo_all_blocks=1 00:39:48.498 --rc geninfo_unexecuted_blocks=1 00:39:48.498 00:39:48.498 ' 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.498 --rc genhtml_branch_coverage=1 00:39:48.498 --rc genhtml_function_coverage=1 00:39:48.498 --rc genhtml_legend=1 00:39:48.498 --rc geninfo_all_blocks=1 00:39:48.498 --rc geninfo_unexecuted_blocks=1 00:39:48.498 00:39:48.498 ' 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.498 --rc genhtml_branch_coverage=1 00:39:48.498 --rc genhtml_function_coverage=1 00:39:48.498 --rc genhtml_legend=1 00:39:48.498 --rc geninfo_all_blocks=1 00:39:48.498 --rc geninfo_unexecuted_blocks=1 00:39:48.498 00:39:48.498 ' 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.498 --rc genhtml_branch_coverage=1 00:39:48.498 --rc genhtml_function_coverage=1 00:39:48.498 --rc genhtml_legend=1 00:39:48.498 --rc geninfo_all_blocks=1 00:39:48.498 --rc geninfo_unexecuted_blocks=1 00:39:48.498 00:39:48.498 ' 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:39:48.498 23:21:28 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:39:48.498 23:21:28 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:39:48.498 23:21:28 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:39:48.498 23:21:28 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:39:48.498 23:21:28 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:39:48.498 23:21:28 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:39:48.498 23:21:28 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:39:48.498 23:21:28 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:39:48.498 23:21:28 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:39:48.498 23:21:28 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.498 23:21:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:39:48.498 ************************************ 00:39:48.498 START TEST test_save_ublk_config 00:39:48.498 ************************************ 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:39:48.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73597 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73597 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73597 ']' 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.498 23:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:48.498 [2024-12-09 23:21:29.002385] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
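Stripped of the xtrace noise, test_save_config boils down to a save/restore round trip over rpc.py against the spdk_tgt that has just started. A condensed sketch of the first half, with the caveat that the exact rpc.py argument spellings are best-effort and not verified against this tree:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ublk &        # first target, pid 73597 here
    # (after waitforlisten) 8192 blocks x 4096 B = a 32 MiB malloc bdev,
    # matching the bdev_malloc_create params in the saved config below
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096
    "$SPDK/scripts/rpc.py" ublk_create_target   # this run used cpumask "1"
    "$SPDK/scripts/rpc.py" ublk_start_disk malloc0 0  # defaults: 1 queue, depth 128
    config=$("$SPDK/scripts/rpc.py" save_config)      # the JSON dumped below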
00:39:48.498 [2024-12-09 23:21:29.002759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73597 ] 00:39:48.759 [2024-12-09 23:21:29.168181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.760 [2024-12-09 23:21:29.317950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:49.703 [2024-12-09 23:21:30.141009] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:39:49.703 [2024-12-09 23:21:30.141994] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:39:49.703 malloc0 00:39:49.703 [2024-12-09 23:21:30.213151] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:39:49.703 [2024-12-09 23:21:30.213252] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:39:49.703 [2024-12-09 23:21:30.213263] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:39:49.703 [2024-12-09 23:21:30.213271] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:39:49.703 [2024-12-09 23:21:30.222121] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:39:49.703 [2024-12-09 23:21:30.222154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:39:49.703 [2024-12-09 23:21:30.228135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:39:49.703 [2024-12-09 23:21:30.228560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:39:49.703 [2024-12-09 23:21:30.237123] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:39:49.703 0 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.703 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:49.965 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.965 23:21:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:39:49.965 "subsystems": [ 00:39:49.965 { 00:39:49.965 "subsystem": "fsdev", 00:39:49.965 "config": [ 00:39:49.965 { 00:39:49.965 "method": "fsdev_set_opts", 00:39:49.965 "params": { 00:39:49.965 "fsdev_io_pool_size": 65535, 00:39:49.965 "fsdev_io_cache_size": 256 00:39:49.965 } 00:39:49.965 } 00:39:49.965 ] 00:39:49.965 }, 00:39:49.965 { 00:39:49.966 "subsystem": "keyring", 00:39:49.966 "config": [] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "iobuf", 00:39:49.966 "config": [ 00:39:49.966 { 
00:39:49.966 "method": "iobuf_set_options", 00:39:49.966 "params": { 00:39:49.966 "small_pool_count": 8192, 00:39:49.966 "large_pool_count": 1024, 00:39:49.966 "small_bufsize": 8192, 00:39:49.966 "large_bufsize": 135168, 00:39:49.966 "enable_numa": false 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "sock", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "sock_set_default_impl", 00:39:49.966 "params": { 00:39:49.966 "impl_name": "posix" 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "sock_impl_set_options", 00:39:49.966 "params": { 00:39:49.966 "impl_name": "ssl", 00:39:49.966 "recv_buf_size": 4096, 00:39:49.966 "send_buf_size": 4096, 00:39:49.966 "enable_recv_pipe": true, 00:39:49.966 "enable_quickack": false, 00:39:49.966 "enable_placement_id": 0, 00:39:49.966 "enable_zerocopy_send_server": true, 00:39:49.966 "enable_zerocopy_send_client": false, 00:39:49.966 "zerocopy_threshold": 0, 00:39:49.966 "tls_version": 0, 00:39:49.966 "enable_ktls": false 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "sock_impl_set_options", 00:39:49.966 "params": { 00:39:49.966 "impl_name": "posix", 00:39:49.966 "recv_buf_size": 2097152, 00:39:49.966 "send_buf_size": 2097152, 00:39:49.966 "enable_recv_pipe": true, 00:39:49.966 "enable_quickack": false, 00:39:49.966 "enable_placement_id": 0, 00:39:49.966 "enable_zerocopy_send_server": true, 00:39:49.966 "enable_zerocopy_send_client": false, 00:39:49.966 "zerocopy_threshold": 0, 00:39:49.966 "tls_version": 0, 00:39:49.966 "enable_ktls": false 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "vmd", 00:39:49.966 "config": [] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "accel", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "accel_set_options", 00:39:49.966 "params": { 00:39:49.966 "small_cache_size": 128, 00:39:49.966 "large_cache_size": 16, 00:39:49.966 "task_count": 2048, 00:39:49.966 "sequence_count": 2048, 00:39:49.966 "buf_count": 2048 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "bdev", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "bdev_set_options", 00:39:49.966 "params": { 00:39:49.966 "bdev_io_pool_size": 65535, 00:39:49.966 "bdev_io_cache_size": 256, 00:39:49.966 "bdev_auto_examine": true, 00:39:49.966 "iobuf_small_cache_size": 128, 00:39:49.966 "iobuf_large_cache_size": 16 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_raid_set_options", 00:39:49.966 "params": { 00:39:49.966 "process_window_size_kb": 1024, 00:39:49.966 "process_max_bandwidth_mb_sec": 0 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_iscsi_set_options", 00:39:49.966 "params": { 00:39:49.966 "timeout_sec": 30 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_nvme_set_options", 00:39:49.966 "params": { 00:39:49.966 "action_on_timeout": "none", 00:39:49.966 "timeout_us": 0, 00:39:49.966 "timeout_admin_us": 0, 00:39:49.966 "keep_alive_timeout_ms": 10000, 00:39:49.966 "arbitration_burst": 0, 00:39:49.966 "low_priority_weight": 0, 00:39:49.966 "medium_priority_weight": 0, 00:39:49.966 "high_priority_weight": 0, 00:39:49.966 "nvme_adminq_poll_period_us": 10000, 00:39:49.966 "nvme_ioq_poll_period_us": 0, 00:39:49.966 "io_queue_requests": 0, 00:39:49.966 "delay_cmd_submit": true, 00:39:49.966 "transport_retry_count": 4, 00:39:49.966 
"bdev_retry_count": 3, 00:39:49.966 "transport_ack_timeout": 0, 00:39:49.966 "ctrlr_loss_timeout_sec": 0, 00:39:49.966 "reconnect_delay_sec": 0, 00:39:49.966 "fast_io_fail_timeout_sec": 0, 00:39:49.966 "disable_auto_failback": false, 00:39:49.966 "generate_uuids": false, 00:39:49.966 "transport_tos": 0, 00:39:49.966 "nvme_error_stat": false, 00:39:49.966 "rdma_srq_size": 0, 00:39:49.966 "io_path_stat": false, 00:39:49.966 "allow_accel_sequence": false, 00:39:49.966 "rdma_max_cq_size": 0, 00:39:49.966 "rdma_cm_event_timeout_ms": 0, 00:39:49.966 "dhchap_digests": [ 00:39:49.966 "sha256", 00:39:49.966 "sha384", 00:39:49.966 "sha512" 00:39:49.966 ], 00:39:49.966 "dhchap_dhgroups": [ 00:39:49.966 "null", 00:39:49.966 "ffdhe2048", 00:39:49.966 "ffdhe3072", 00:39:49.966 "ffdhe4096", 00:39:49.966 "ffdhe6144", 00:39:49.966 "ffdhe8192" 00:39:49.966 ] 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_nvme_set_hotplug", 00:39:49.966 "params": { 00:39:49.966 "period_us": 100000, 00:39:49.966 "enable": false 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_malloc_create", 00:39:49.966 "params": { 00:39:49.966 "name": "malloc0", 00:39:49.966 "num_blocks": 8192, 00:39:49.966 "block_size": 4096, 00:39:49.966 "physical_block_size": 4096, 00:39:49.966 "uuid": "efa9596b-7a84-495d-af78-b998ec5114a6", 00:39:49.966 "optimal_io_boundary": 0, 00:39:49.966 "md_size": 0, 00:39:49.966 "dif_type": 0, 00:39:49.966 "dif_is_head_of_md": false, 00:39:49.966 "dif_pi_format": 0 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "bdev_wait_for_examine" 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "scsi", 00:39:49.966 "config": null 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "scheduler", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "framework_set_scheduler", 00:39:49.966 "params": { 00:39:49.966 "name": "static" 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "vhost_scsi", 00:39:49.966 "config": [] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "vhost_blk", 00:39:49.966 "config": [] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "ublk", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "ublk_create_target", 00:39:49.966 "params": { 00:39:49.966 "cpumask": "1" 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "ublk_start_disk", 00:39:49.966 "params": { 00:39:49.966 "bdev_name": "malloc0", 00:39:49.966 "ublk_id": 0, 00:39:49.966 "num_queues": 1, 00:39:49.966 "queue_depth": 128 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "nbd", 00:39:49.966 "config": [] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "nvmf", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "nvmf_set_config", 00:39:49.966 "params": { 00:39:49.966 "discovery_filter": "match_any", 00:39:49.966 "admin_cmd_passthru": { 00:39:49.966 "identify_ctrlr": false 00:39:49.966 }, 00:39:49.966 "dhchap_digests": [ 00:39:49.966 "sha256", 00:39:49.966 "sha384", 00:39:49.966 "sha512" 00:39:49.966 ], 00:39:49.966 "dhchap_dhgroups": [ 00:39:49.966 "null", 00:39:49.966 "ffdhe2048", 00:39:49.966 "ffdhe3072", 00:39:49.966 "ffdhe4096", 00:39:49.966 "ffdhe6144", 00:39:49.966 "ffdhe8192" 00:39:49.966 ] 00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "nvmf_set_max_subsystems", 00:39:49.966 "params": { 00:39:49.966 "max_subsystems": 1024 
00:39:49.966 } 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "method": "nvmf_set_crdt", 00:39:49.966 "params": { 00:39:49.966 "crdt1": 0, 00:39:49.966 "crdt2": 0, 00:39:49.966 "crdt3": 0 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 }, 00:39:49.966 { 00:39:49.966 "subsystem": "iscsi", 00:39:49.966 "config": [ 00:39:49.966 { 00:39:49.966 "method": "iscsi_set_options", 00:39:49.966 "params": { 00:39:49.966 "node_base": "iqn.2016-06.io.spdk", 00:39:49.966 "max_sessions": 128, 00:39:49.966 "max_connections_per_session": 2, 00:39:49.966 "max_queue_depth": 64, 00:39:49.966 "default_time2wait": 2, 00:39:49.966 "default_time2retain": 20, 00:39:49.966 "first_burst_length": 8192, 00:39:49.966 "immediate_data": true, 00:39:49.966 "allow_duplicated_isid": false, 00:39:49.966 "error_recovery_level": 0, 00:39:49.966 "nop_timeout": 60, 00:39:49.966 "nop_in_interval": 30, 00:39:49.966 "disable_chap": false, 00:39:49.966 "require_chap": false, 00:39:49.966 "mutual_chap": false, 00:39:49.966 "chap_group": 0, 00:39:49.966 "max_large_datain_per_connection": 64, 00:39:49.966 "max_r2t_per_connection": 4, 00:39:49.966 "pdu_pool_size": 36864, 00:39:49.966 "immediate_data_pool_size": 16384, 00:39:49.966 "data_out_pool_size": 2048 00:39:49.966 } 00:39:49.966 } 00:39:49.966 ] 00:39:49.966 } 00:39:49.966 ] 00:39:49.967 }' 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73597 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73597 ']' 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73597 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73597 00:39:49.967 killing process with pid 73597 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73597' 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73597 00:39:49.967 23:21:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73597 00:39:51.351 [2024-12-09 23:21:31.556565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:39:51.351 [2024-12-09 23:21:31.591080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:39:51.351 [2024-12-09 23:21:31.591177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:39:51.351 [2024-12-09 23:21:31.600032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:39:51.351 [2024-12-09 23:21:31.600079] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:39:51.351 [2024-12-09 23:21:31.600090] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:39:51.351 [2024-12-09 23:21:31.600110] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:39:51.351 [2024-12-09 23:21:31.600221] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73651 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73651 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73651 ']' 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:52.293 23:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:39:52.293 "subsystems": [ 00:39:52.293 { 00:39:52.293 "subsystem": "fsdev", 00:39:52.293 "config": [ 00:39:52.293 { 00:39:52.293 "method": "fsdev_set_opts", 00:39:52.293 "params": { 00:39:52.293 "fsdev_io_pool_size": 65535, 00:39:52.293 "fsdev_io_cache_size": 256 00:39:52.293 } 00:39:52.293 } 00:39:52.293 ] 00:39:52.293 }, 00:39:52.293 { 00:39:52.293 "subsystem": "keyring", 00:39:52.293 "config": [] 00:39:52.293 }, 00:39:52.293 { 00:39:52.293 "subsystem": "iobuf", 00:39:52.293 "config": [ 00:39:52.293 { 00:39:52.293 "method": "iobuf_set_options", 00:39:52.293 "params": { 00:39:52.293 "small_pool_count": 8192, 00:39:52.293 "large_pool_count": 1024, 00:39:52.293 "small_bufsize": 8192, 00:39:52.293 "large_bufsize": 135168, 00:39:52.293 "enable_numa": false 00:39:52.293 } 00:39:52.293 } 00:39:52.293 ] 00:39:52.293 }, 00:39:52.293 { 00:39:52.293 "subsystem": "sock", 00:39:52.293 "config": [ 00:39:52.293 { 00:39:52.293 "method": "sock_set_default_impl", 00:39:52.293 "params": { 00:39:52.293 "impl_name": "posix" 00:39:52.293 } 00:39:52.293 }, 00:39:52.293 { 00:39:52.293 "method": "sock_impl_set_options", 00:39:52.293 "params": { 00:39:52.293 "impl_name": "ssl", 00:39:52.293 "recv_buf_size": 4096, 00:39:52.293 "send_buf_size": 4096, 00:39:52.293 "enable_recv_pipe": true, 00:39:52.293 "enable_quickack": false, 00:39:52.293 "enable_placement_id": 0, 00:39:52.293 "enable_zerocopy_send_server": true, 00:39:52.293 "enable_zerocopy_send_client": false, 00:39:52.293 "zerocopy_threshold": 0, 00:39:52.293 "tls_version": 0, 00:39:52.293 "enable_ktls": false 00:39:52.293 } 00:39:52.293 }, 00:39:52.293 { 00:39:52.293 "method": "sock_impl_set_options", 00:39:52.293 "params": { 00:39:52.293 "impl_name": "posix", 00:39:52.293 "recv_buf_size": 2097152, 00:39:52.294 "send_buf_size": 2097152, 00:39:52.294 "enable_recv_pipe": true, 00:39:52.294 "enable_quickack": false, 00:39:52.294 "enable_placement_id": 0, 00:39:52.294 "enable_zerocopy_send_server": true, 00:39:52.294 "enable_zerocopy_send_client": false, 00:39:52.294 "zerocopy_threshold": 0, 00:39:52.294 "tls_version": 0, 00:39:52.294 "enable_ktls": false 00:39:52.294 } 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "vmd", 00:39:52.294 "config": [] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "accel", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "accel_set_options", 00:39:52.294 "params": { 00:39:52.294 "small_cache_size": 128, 
00:39:52.294 "large_cache_size": 16, 00:39:52.294 "task_count": 2048, 00:39:52.294 "sequence_count": 2048, 00:39:52.294 "buf_count": 2048 00:39:52.294 } 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "bdev", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "bdev_set_options", 00:39:52.294 "params": { 00:39:52.294 "bdev_io_pool_size": 65535, 00:39:52.294 "bdev_io_cache_size": 256, 00:39:52.294 "bdev_auto_examine": true, 00:39:52.294 "iobuf_small_cache_size": 128, 00:39:52.294 "iobuf_large_cache_size": 16 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_raid_set_options", 00:39:52.294 "params": { 00:39:52.294 "process_window_size_kb": 1024, 00:39:52.294 "process_max_bandwidth_mb_sec": 0 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_iscsi_set_options", 00:39:52.294 "params": { 00:39:52.294 "timeout_sec": 30 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_nvme_set_options", 00:39:52.294 "params": { 00:39:52.294 "action_on_timeout": "none", 00:39:52.294 "timeout_us": 0, 00:39:52.294 "timeout_admin_us": 0, 00:39:52.294 "keep_alive_timeout_ms": 10000, 00:39:52.294 "arbitration_burst": 0, 00:39:52.294 "low_priority_weight": 0, 00:39:52.294 "medium_priority_weight": 0, 00:39:52.294 "high_priority_weight": 0, 00:39:52.294 "nvme_adminq_poll_period_us": 10000, 00:39:52.294 "nvme_ioq_poll_period_us": 0, 00:39:52.294 "io_queue_requests": 0, 00:39:52.294 "delay_cmd_submit": true, 00:39:52.294 "transport_retry_count": 4, 00:39:52.294 "bdev_retry_count": 3, 00:39:52.294 "transport_ack_timeout": 0, 00:39:52.294 "ctrlr_loss_timeout_sec": 0, 00:39:52.294 "reconnect_delay_sec": 0, 00:39:52.294 "fast_io_fail_timeout_sec": 0, 00:39:52.294 "disable_auto_failback": false, 00:39:52.294 "generate_uuids": false, 00:39:52.294 "transport_tos": 0, 00:39:52.294 "nvme_error_stat": false, 00:39:52.294 "rdma_srq_size": 0, 00:39:52.294 "io_path_stat": false, 00:39:52.294 "allow_accel_sequence": false, 00:39:52.294 "rdma_max_cq_size": 0, 00:39:52.294 "rdma_cm_event_timeout_ms": 0, 00:39:52.294 "dhchap_digests": [ 00:39:52.294 "sha256", 00:39:52.294 "sha384", 00:39:52.294 "sha512" 00:39:52.294 ], 00:39:52.294 "dhchap_dhgroups": [ 00:39:52.294 "null", 00:39:52.294 "ffdhe2048", 00:39:52.294 "ffdhe3072", 00:39:52.294 "ffdhe4096", 00:39:52.294 "ffdhe6144", 00:39:52.294 "ffdhe8192" 00:39:52.294 ] 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_nvme_set_hotplug", 00:39:52.294 "params": { 00:39:52.294 "period_us": 100000, 00:39:52.294 "enable": false 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_malloc_create", 00:39:52.294 "params": { 00:39:52.294 "name": "malloc0", 00:39:52.294 "num_blocks": 8192, 00:39:52.294 "block_size": 4096, 00:39:52.294 "physical_block_size": 4096, 00:39:52.294 "uuid": "efa9596b-7a84-495d-af78-b998ec5114a6", 00:39:52.294 "optimal_io_boundary": 0, 00:39:52.294 "md_size": 0, 00:39:52.294 "dif_type": 0, 00:39:52.294 "dif_is_head_of_md": false, 00:39:52.294 "dif_pi_format": 0 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_wait_for_examine" 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "scsi", 00:39:52.294 "config": null 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "scheduler", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "framework_set_scheduler", 00:39:52.294 "params": { 00:39:52.294 "name": "static" 00:39:52.294 } 
00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "vhost_scsi", 00:39:52.294 "config": [] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "vhost_blk", 00:39:52.294 "config": [] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "ublk", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "ublk_create_target", 00:39:52.294 "params": { 00:39:52.294 "cpumask": "1" 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "ublk_start_disk", 00:39:52.294 "params": { 00:39:52.294 "bdev_name": "malloc0", 00:39:52.294 "ublk_id": 0, 00:39:52.294 "num_queues": 1, 00:39:52.294 "queue_depth": 128 00:39:52.294 } 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "nbd", 00:39:52.294 "config": [] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "nvmf", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "nvmf_set_config", 00:39:52.294 "params": { 00:39:52.294 "discovery_filter": "match_any", 00:39:52.294 "admin_cmd_passthru": { 00:39:52.294 "identify_ctrlr": false 00:39:52.294 }, 00:39:52.294 "dhchap_digests": [ 00:39:52.294 "sha256", 00:39:52.294 "sha384", 00:39:52.294 "sha512" 00:39:52.294 ], 00:39:52.294 "dhchap_dhgroups": [ 00:39:52.294 "null", 00:39:52.294 "ffdhe2048", 00:39:52.294 "ffdhe3072", 00:39:52.294 "ffdhe4096", 00:39:52.294 "ffdhe6144", 00:39:52.294 "ffdhe8192" 00:39:52.294 ] 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "nvmf_set_max_subsystems", 00:39:52.294 "params": { 00:39:52.294 "max_subsystems": 1024 00:39:52.294 } 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "nvmf_set_crdt", 00:39:52.294 "params": { 00:39:52.294 "crdt1": 0, 00:39:52.294 "crdt2": 0, 00:39:52.294 "crdt3": 0 00:39:52.294 } 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "subsystem": "iscsi", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "method": "iscsi_set_options", 00:39:52.294 "params": { 00:39:52.294 "node_base": "iqn.2016-06.io.spdk", 00:39:52.294 "max_sessions": 128, 00:39:52.294 "max_connections_per_session": 2, 00:39:52.294 "max_queue_depth": 64, 00:39:52.294 "default_time2wait": 2, 00:39:52.294 "default_time2retain": 20, 00:39:52.294 "first_burst_length": 8192, 00:39:52.294 "immediate_data": true, 00:39:52.294 "allow_duplicated_isid": false, 00:39:52.294 "error_recovery_level": 0, 00:39:52.294 "nop_timeout": 60, 00:39:52.294 "nop_in_interval": 30, 00:39:52.294 "disable_chap": false, 00:39:52.294 "require_chap": false, 00:39:52.294 "mutual_chap": false, 00:39:52.294 "chap_group": 0, 00:39:52.294 "max_large_datain_per_connection": 64, 00:39:52.294 "max_r2t_per_connection": 4, 00:39:52.294 "pdu_pool_size": 36864, 00:39:52.294 "immediate_data_pool_size": 16384, 00:39:52.294 "data_out_pool_size": 2048 00:39:52.294 } 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 }' 00:39:52.294 [2024-12-09 23:21:32.868267] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
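The JSON blob echoed above is a previously captured spdk_tgt configuration that ublk.sh@118 feeds back in through process substitution (the -c /dev/fd/63 on the command line). A minimal sketch of how such a dump is produced and replayed, assuming a running target and the stock rpc.py from the SPDK tree; the file path is illustrative:

    # Capture the live configuration and inspect its ublk portion.
    ./scripts/rpc.py save_config > /tmp/ublk_config.json
    jq '.subsystems[] | select(.subsystem == "ublk")' /tmp/ublk_config.json
    # Replaying the file via process substitution is what yields -c /dev/fd/63:
    ./build/bin/spdk_tgt -L ublk -c <(cat /tmp/ublk_config.json)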
00:39:52.294 [2024-12-09 23:21:32.868387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73651 ] 00:39:52.552 [2024-12-09 23:21:33.028539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.552 [2024-12-09 23:21:33.128113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.495 [2024-12-09 23:21:33.927019] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:39:53.495 [2024-12-09 23:21:33.927922] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:39:53.495 [2024-12-09 23:21:33.935159] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:39:53.495 [2024-12-09 23:21:33.935251] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:39:53.495 [2024-12-09 23:21:33.935262] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:39:53.495 [2024-12-09 23:21:33.935270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:39:53.495 [2024-12-09 23:21:33.944485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:39:53.495 [2024-12-09 23:21:33.944517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:39:53.495 [2024-12-09 23:21:33.951040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:39:53.495 [2024-12-09 23:21:33.951168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:39:53.495 [2024-12-09 23:21:33.968017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73651 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73651 ']' 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73651 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73651 00:39:53.495 killing process with pid 73651 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.495 
23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73651' 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73651 00:39:53.495 23:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73651 00:39:54.873 [2024-12-09 23:21:35.231097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:39:54.873 [2024-12-09 23:21:35.295031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:39:54.873 [2024-12-09 23:21:35.295145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:39:54.873 [2024-12-09 23:21:35.304025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:39:54.873 [2024-12-09 23:21:35.304241] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:39:54.873 [2024-12-09 23:21:35.307994] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:39:54.873 [2024-12-09 23:21:35.308027] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:39:54.873 [2024-12-09 23:21:35.308181] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:39:56.246 23:21:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:39:56.246 00:39:56.246 real 0m7.689s 00:39:56.246 user 0m5.315s 00:39:56.246 sys 0m3.020s 00:39:56.246 ************************************ 00:39:56.246 END TEST test_save_ublk_config 00:39:56.246 ************************************ 00:39:56.246 23:21:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.246 23:21:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:39:56.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.246 23:21:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73735 00:39:56.246 23:21:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:39:56.246 23:21:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:56.246 23:21:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73735 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@835 -- # '[' -z 73735 ']' 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.246 23:21:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:39:56.246 [2024-12-09 23:21:36.697390] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
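waitforlisten, whose xtrace appears above, blocks until the freshly launched target answers on its UNIX domain socket. A simplified sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using the standard spdk_get_version RPC as the liveness probe; the real helper in autotest_common.sh also handles a configurable rpc_addr:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    max_retries=100
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$spdk_pid" || exit 1          # target died during startup
        (( --max_retries > 0 )) || exit 1      # give up after 100 attempts
        sleep 0.1
    done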
00:39:56.246 [2024-12-09 23:21:36.697528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73735 ] 00:39:56.246 [2024-12-09 23:21:36.854203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:56.504 [2024-12-09 23:21:36.935601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.504 [2024-12-09 23:21:36.935787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.069 23:21:37 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.070 23:21:37 ublk -- common/autotest_common.sh@868 -- # return 0 00:39:57.070 23:21:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:39:57.070 23:21:37 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:57.070 23:21:37 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.070 23:21:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:39:57.070 ************************************ 00:39:57.070 START TEST test_create_ublk 00:39:57.070 ************************************ 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:39:57.070 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:39:57.070 [2024-12-09 23:21:37.542001] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:39:57.070 [2024-12-09 23:21:37.543584] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.070 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:39:57.070 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.070 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:39:57.070 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.070 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:39:57.328 [2024-12-09 23:21:37.714109] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:39:57.328 [2024-12-09 23:21:37.714411] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:39:57.328 [2024-12-09 23:21:37.714424] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:39:57.328 [2024-12-09 23:21:37.714431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:39:57.328 [2024-12-09 23:21:37.721067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:39:57.328 [2024-12-09 23:21:37.721086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:39:57.328 
[2024-12-09 23:21:37.730007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:39:57.328 [2024-12-09 23:21:37.730516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:39:57.328 [2024-12-09 23:21:37.741026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:39:57.328 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:39:57.328 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.328 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:39:57.328 23:21:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:39:57.328 { 00:39:57.328 "ublk_device": "/dev/ublkb0", 00:39:57.328 "id": 0, 00:39:57.328 "queue_depth": 512, 00:39:57.328 "num_queues": 4, 00:39:57.328 "bdev_name": "Malloc0" 00:39:57.328 } 00:39:57.328 ]' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
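run_fio_test (lvol/common.sh@40..53 above) only assembles a fio command line from its arguments; unwrapped, the invocation it builds for this test is the one below, with every flag taken directly from the template just assembled. The 0xcc verify pattern lets each written block be checked during the read-back phase:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0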
00:39:57.328 23:21:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:39:57.586 fio: verification read phase will never start because write phase uses all of runtime 00:39:57.586 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:39:57.586 fio-3.35 00:39:57.586 Starting 1 process 00:40:07.564 00:40:07.564 fio_test: (groupid=0, jobs=1): err= 0: pid=73780: Mon Dec 9 23:21:48 2024 00:40:07.564 write: IOPS=20.7k, BW=80.9MiB/s (84.8MB/s)(809MiB/10001msec); 0 zone resets 00:40:07.564 clat (usec): min=32, max=3947, avg=47.52, stdev=84.87 00:40:07.564 lat (usec): min=33, max=3947, avg=47.96, stdev=84.89 00:40:07.564 clat percentiles (usec): 00:40:07.564 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:40:07.564 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:40:07.564 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 53], 95.00th=[ 58], 00:40:07.564 | 99.00th=[ 68], 99.50th=[ 76], 99.90th=[ 1434], 99.95th=[ 2606], 00:40:07.564 | 99.99th=[ 3490] 00:40:07.564 bw ( KiB/s): min=70984, max=88640, per=99.81%, avg=82670.32, stdev=5289.63, samples=19 00:40:07.564 iops : min=17746, max=22160, avg=20667.58, stdev=1322.41, samples=19 00:40:07.564 lat (usec) : 50=86.31%, 100=13.40%, 250=0.11%, 500=0.04%, 750=0.01% 00:40:07.564 lat (usec) : 1000=0.01% 00:40:07.564 lat (msec) : 2=0.05%, 4=0.07% 00:40:07.564 cpu : usr=2.89%, sys=14.10%, ctx=207091, majf=0, minf=795 00:40:07.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:07.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.564 issued rwts: total=0,207092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:07.564 00:40:07.564 Run status group 0 (all jobs): 00:40:07.564 WRITE: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=809MiB (848MB), run=10001-10001msec 00:40:07.564 00:40:07.564 Disk stats (read/write): 00:40:07.564 ublkb0: ios=0/204833, merge=0/0, ticks=0/8307, in_queue=8307, util=98.94% 00:40:07.564 23:21:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:07.564 [2024-12-09 23:21:48.142708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:07.564 [2024-12-09 23:21:48.172408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:07.564 [2024-12-09 23:21:48.173324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:07.564 [2024-12-09 23:21:48.189030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:07.564 [2024-12-09 23:21:48.190227] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:07.564 [2024-12-09 23:21:48.190289] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.564 23:21:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
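The NOT helper expanded next inverts a command's exit status, so the test passes only when the RPC fails: disk 0 was stopped just above, and a second ublk_stop_disk 0 must come back with 'No such device'. A condensed sketch of the idea, assuming the simple invert-status form (the full autotest_common.sh version traced below also distinguishes signal exits via es > 128):

    NOT() {
        if "$@"; then
            return 1      # command unexpectedly succeeded
        fi
        return 0          # command failed, which is what the caller wanted
    }
    NOT ./scripts/rpc.py ublk_stop_disk 0    # disk 0 is already gone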
00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.564 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:07.825 [2024-12-09 23:21:48.204066] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:40:07.825 request: 00:40:07.825 { 00:40:07.825 "ublk_id": 0, 00:40:07.825 "method": "ublk_stop_disk", 00:40:07.825 "req_id": 1 00:40:07.825 } 00:40:07.825 Got JSON-RPC error response 00:40:07.825 response: 00:40:07.825 { 00:40:07.825 "code": -19, 00:40:07.825 "message": "No such device" 00:40:07.825 } 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:07.825 23:21:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:07.825 [2024-12-09 23:21:48.220059] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:40:07.825 [2024-12-09 23:21:48.223757] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:40:07.825 [2024-12-09 23:21:48.223787] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.825 23:21:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.825 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.085 23:21:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:40:08.085 23:21:48 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.085 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:40:08.085 23:21:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:40:08.085 ************************************ 00:40:08.086 END TEST test_create_ublk 00:40:08.086 ************************************ 00:40:08.086 23:21:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:40:08.086 00:40:08.086 real 0m11.144s 00:40:08.086 user 0m0.589s 00:40:08.086 sys 0m1.467s 00:40:08.086 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:08.086 23:21:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.086 23:21:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:40:08.086 23:21:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:08.086 23:21:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:08.086 23:21:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.086 ************************************ 00:40:08.086 START TEST test_create_multi_ublk 00:40:08.086 ************************************ 00:40:08.086 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:40:08.086 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:40:08.086 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.086 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.346 [2024-12-09 23:21:48.731991] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:40:08.346 [2024-12-09 23:21:48.733549] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.346 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.346 [2024-12-09 23:21:48.960102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
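test_create_multi_ublk repeats the single-disk sequence four times over Malloc0..Malloc3, as the ublk.sh@64..68 loop above and the driver DEBUG lines around this point show. A condensed equivalent of the loop, using the same bdev sizes and queue settings that appear in the trace:

    ./scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB bdev, 4 KiB blocks
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # 4 queues, depth 512
    done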
00:40:08.346 [2024-12-09 23:21:48.960406] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:40:08.346 [2024-12-09 23:21:48.960417] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:08.346 [2024-12-09 23:21:48.960426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:08.346 [2024-12-09 23:21:48.972047] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:08.346 [2024-12-09 23:21:48.972067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:08.604 [2024-12-09 23:21:48.984000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:08.604 [2024-12-09 23:21:48.984511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:08.604 [2024-12-09 23:21:48.997061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:08.605 23:21:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.605 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.605 [2024-12-09 23:21:49.220102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:40:08.605 [2024-12-09 23:21:49.220400] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:40:08.605 [2024-12-09 23:21:49.220412] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:40:08.605 [2024-12-09 23:21:49.220418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:40:08.605 [2024-12-09 23:21:49.229175] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:08.605 [2024-12-09 23:21:49.229192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:08.605 [2024-12-09 23:21:49.236013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:08.605 [2024-12-09 23:21:49.236506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:40:08.862 [2024-12-09 23:21:49.254015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:08.862 23:21:49 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:08.862 [2024-12-09 23:21:49.415087] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:40:08.862 [2024-12-09 23:21:49.415390] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:40:08.862 [2024-12-09 23:21:49.415403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:40:08.862 [2024-12-09 23:21:49.415409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:40:08.862 [2024-12-09 23:21:49.429006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:08.862 [2024-12-09 23:21:49.429027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:08.862 [2024-12-09 23:21:49.437003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:08.862 [2024-12-09 23:21:49.437518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:40:08.862 [2024-12-09 23:21:49.441578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.862 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.123 [2024-12-09 23:21:49.620104] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:40:09.123 [2024-12-09 23:21:49.620404] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:40:09.123 [2024-12-09 23:21:49.620417] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:40:09.123 [2024-12-09 23:21:49.620423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:40:09.123 [2024-12-09 
23:21:49.628017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:09.123 [2024-12-09 23:21:49.628034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:09.123 [2024-12-09 23:21:49.636011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:09.123 [2024-12-09 23:21:49.636530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:40:09.123 [2024-12-09 23:21:49.640824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:40:09.123 { 00:40:09.123 "ublk_device": "/dev/ublkb0", 00:40:09.123 "id": 0, 00:40:09.123 "queue_depth": 512, 00:40:09.123 "num_queues": 4, 00:40:09.123 "bdev_name": "Malloc0" 00:40:09.123 }, 00:40:09.123 { 00:40:09.123 "ublk_device": "/dev/ublkb1", 00:40:09.123 "id": 1, 00:40:09.123 "queue_depth": 512, 00:40:09.123 "num_queues": 4, 00:40:09.123 "bdev_name": "Malloc1" 00:40:09.123 }, 00:40:09.123 { 00:40:09.123 "ublk_device": "/dev/ublkb2", 00:40:09.123 "id": 2, 00:40:09.123 "queue_depth": 512, 00:40:09.123 "num_queues": 4, 00:40:09.123 "bdev_name": "Malloc2" 00:40:09.123 }, 00:40:09.123 { 00:40:09.123 "ublk_device": "/dev/ublkb3", 00:40:09.123 "id": 3, 00:40:09.123 "queue_depth": 512, 00:40:09.123 "num_queues": 4, 00:40:09.123 "bdev_name": "Malloc3" 00:40:09.123 } 00:40:09.123 ]' 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:40:09.123 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
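The checks running here (ublk.sh@74..78) fetch the disk list once and then compare each field with jq; the same pattern continues below for disks 2 and 3. Shown for disk 1, with values matching the ublk_get_disks output above:

    ublk_dev=$(./scripts/rpc.py ublk_get_disks)
    [[ $(jq -r '.[1].ublk_device' <<< "$ublk_dev") == /dev/ublkb1 ]]
    [[ $(jq -r '.[1].id'          <<< "$ublk_dev") == 1 ]]
    [[ $(jq -r '.[1].queue_depth' <<< "$ublk_dev") == 512 ]]
    [[ $(jq -r '.[1].bdev_name'   <<< "$ublk_dev") == Malloc1 ]]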
00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.385 23:21:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:40:09.385 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:40:09.645 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.906 [2024-12-09 23:21:50.296091] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:09.906 [2024-12-09 23:21:50.337033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:09.906 [2024-12-09 23:21:50.337728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:09.906 [2024-12-09 23:21:50.343007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:09.906 [2024-12-09 23:21:50.343258] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:09.906 [2024-12-09 23:21:50.343270] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.906 [2024-12-09 23:21:50.351075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:40:09.906 [2024-12-09 23:21:50.385500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:09.906 [2024-12-09 23:21:50.386482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:40:09.906 [2024-12-09 23:21:50.399002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:09.906 [2024-12-09 23:21:50.399215] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:40:09.906 [2024-12-09 23:21:50.399224] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:09.906 [2024-12-09 23:21:50.403157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:40:09.906 [2024-12-09 23:21:50.442030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:09.906 [2024-12-09 23:21:50.442664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:40:09.906 [2024-12-09 23:21:50.454001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:09.906 [2024-12-09 23:21:50.454221] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:40:09.906 [2024-12-09 23:21:50.454235] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
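Each pass of the stop loop (ublk.sh@85..86) triggers the UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pair visible in the driver DEBUG lines; once all four disks are gone, the target itself is torn down with a widened RPC timeout, matching the direct rpc.py call traced below. A condensed equivalent:

    for i in 0 1 2 3; do
        ./scripts/rpc.py ublk_stop_disk $i         # STOP_DEV, then DEL_DEV per disk
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target    # 120 s timeout for final shutdown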
00:40:09.906 [2024-12-09 23:21:50.458151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:40:09.906 [2024-12-09 23:21:50.493024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:09.906 [2024-12-09 23:21:50.493625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:40:09.906 [2024-12-09 23:21:50.497201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:09.906 [2024-12-09 23:21:50.497421] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:40:09.906 [2024-12-09 23:21:50.497433] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.906 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:40:10.167 [2024-12-09 23:21:50.660057] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:40:10.167 [2024-12-09 23:21:50.663719] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:40:10.167 [2024-12-09 23:21:50.663748] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:40:10.167 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:40:10.167 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:10.167 23:21:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:10.167 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.167 23:21:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:10.427 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.427 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:10.427 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:40:10.427 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.427 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:10.997 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.997 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:10.997 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:40:10.997 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.997 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:40:11.258 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:40:11.520 ************************************ 00:40:11.520 END TEST test_create_multi_ublk 00:40:11.520 ************************************ 00:40:11.520 23:21:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:40:11.520 00:40:11.520 real 0m3.180s 00:40:11.520 user 0m0.781s 00:40:11.520 sys 0m0.128s 00:40:11.520 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.520 23:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:40:11.520 23:21:51 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:11.520 23:21:51 ublk -- ublk/ublk.sh@147 -- # cleanup 00:40:11.520 23:21:51 ublk -- ublk/ublk.sh@130 -- # killprocess 73735 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@954 -- # '[' -z 73735 ']' 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@958 -- # kill -0 73735 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@959 -- # uname 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73735 00:40:11.520 killing process with pid 73735 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73735' 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@973 -- # kill 73735 00:40:11.520 23:21:51 ublk -- common/autotest_common.sh@978 -- # wait 73735 00:40:12.092 [2024-12-09 23:21:52.483447] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:40:12.092 [2024-12-09 23:21:52.483492] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:40:12.662 00:40:12.662 real 0m24.415s 00:40:12.662 user 0m34.579s 00:40:12.662 sys 0m9.679s 00:40:12.662 23:21:53 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.662 23:21:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:40:12.662 ************************************ 00:40:12.662 END TEST ublk 00:40:12.662 ************************************ 00:40:12.662 23:21:53 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:40:12.662 23:21:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:40:12.662 23:21:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.662 23:21:53 -- common/autotest_common.sh@10 -- # set +x 00:40:12.662 ************************************ 00:40:12.662 START TEST ublk_recovery 00:40:12.662 ************************************ 00:40:12.662 23:21:53 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:40:12.662 * Looking for test storage... 00:40:12.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:40:12.662 23:21:53 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:12.662 23:21:53 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:40:12.662 23:21:53 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.923 23:21:53 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.923 --rc genhtml_branch_coverage=1 00:40:12.923 --rc genhtml_function_coverage=1 00:40:12.923 --rc genhtml_legend=1 00:40:12.923 --rc geninfo_all_blocks=1 00:40:12.923 --rc geninfo_unexecuted_blocks=1 00:40:12.923 00:40:12.923 ' 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.923 --rc genhtml_branch_coverage=1 00:40:12.923 --rc genhtml_function_coverage=1 00:40:12.923 --rc genhtml_legend=1 00:40:12.923 --rc geninfo_all_blocks=1 00:40:12.923 --rc geninfo_unexecuted_blocks=1 00:40:12.923 00:40:12.923 ' 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.923 --rc genhtml_branch_coverage=1 00:40:12.923 --rc genhtml_function_coverage=1 00:40:12.923 --rc genhtml_legend=1 00:40:12.923 --rc geninfo_all_blocks=1 00:40:12.923 --rc geninfo_unexecuted_blocks=1 00:40:12.923 00:40:12.923 ' 00:40:12.923 23:21:53 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.923 --rc genhtml_branch_coverage=1 00:40:12.923 --rc genhtml_function_coverage=1 00:40:12.923 --rc genhtml_legend=1 00:40:12.923 --rc geninfo_all_blocks=1 00:40:12.923 --rc geninfo_unexecuted_blocks=1 00:40:12.923 00:40:12.923 ' 00:40:12.923 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:40:12.923 23:21:53 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:40:12.923 23:21:53 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:40:12.923 23:21:53 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:40:12.924 23:21:53 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:40:12.924 23:21:53 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:40:12.924 23:21:53 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:40:12.924 23:21:53 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:40:12.924 23:21:53 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:40:12.924 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:40:12.924 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74138 00:40:12.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.924 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:12.924 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74138 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74138 ']' 00:40:12.924 23:21:53 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.924 23:21:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.924 [2024-12-09 23:21:53.436935] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:40:12.924 [2024-12-09 23:21:53.437071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74138 ] 00:40:13.185 [2024-12-09 23:21:53.595910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:13.185 [2024-12-09 23:21:53.699536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.185 [2024-12-09 23:21:53.699617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:40:13.756 23:21:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.756 [2024-12-09 23:21:54.293003] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:40:13.756 [2024-12-09 23:21:54.294826] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.756 23:21:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.756 23:21:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.016 malloc0 00:40:14.016 23:21:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.016 23:21:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:40:14.016 23:21:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.016 23:21:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.016 [2024-12-09 23:21:54.397129] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:40:14.016 [2024-12-09 23:21:54.397225] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:40:14.016 [2024-12-09 23:21:54.397236] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:40:14.016 [2024-12-09 23:21:54.397242] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:40:14.016 [2024-12-09 23:21:54.405021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:14.016 [2024-12-09 23:21:54.405042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:14.016 [2024-12-09 23:21:54.413010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:14.016 [2024-12-09 23:21:54.413143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:40:14.016 [2024-12-09 23:21:54.430019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:40:14.016 1 00:40:14.016 23:21:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.016 23:21:54 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:40:14.952 23:21:55 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74173 00:40:14.952 23:21:55 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:40:14.952 23:21:55 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:40:14.952 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:14.952 fio-3.35 00:40:14.952 Starting 1 process 00:40:20.230 23:22:00 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74138 00:40:20.230 23:22:00 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:40:25.504 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74138 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:40:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:25.504 23:22:05 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74278 00:40:25.504 23:22:05 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:25.504 23:22:05 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74278 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74278 ']' 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:25.504 23:22:05 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:25.504 23:22:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.504 [2024-12-09 23:22:05.531611] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
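The trace above is the setup half of the recovery scenario: a target on cores 0-1 with ublk tracing enabled, a ublk disk carved out of a 64 MiB malloc bdev, fio pinned to cores 2-3 against /dev/ublkb1, and a SIGKILL five seconds into the run. A minimal sketch of that sequence, reusing the exact parameters from the log; rpc.py talking to the default /var/tmp/spdk.sock is an assumption:

```bash
# Setup half of the ublk recovery test, condensed from the trace above.
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

"$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &    # target on cores 0-1, ublk tracing on
spdk_pid=$!

"$RPC" ublk_create_target                      # bring up the ublk target
"$RPC" bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4 KiB blocks
"$RPC" ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1: 2 queues, depth 128

# Drive I/O from cores 2-3 while the target owns the device...
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &

# ...then SIGKILL the target mid-run to simulate a crash under load
sleep 5
kill -9 "$spdk_pid"
```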
00:40:25.504 [2024-12-09 23:22:05.531724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74278 ] 00:40:25.505 [2024-12-09 23:22:05.690468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:25.505 [2024-12-09 23:22:05.793811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:25.505 [2024-12-09 23:22:05.793906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.762 23:22:06 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:25.762 23:22:06 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:40:25.762 23:22:06 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:40:25.762 23:22:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.763 23:22:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.763 [2024-12-09 23:22:06.390007] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:40:25.763 [2024-12-09 23:22:06.391839] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:25.763 23:22:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.763 23:22:06 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:40:25.763 23:22:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.763 23:22:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.023 malloc0 00:40:26.023 23:22:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.023 23:22:06 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:40:26.023 23:22:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.023 23:22:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.023 [2024-12-09 23:22:06.494120] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:40:26.023 [2024-12-09 23:22:06.494157] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:40:26.023 [2024-12-09 23:22:06.494167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:40:26.023 [2024-12-09 23:22:06.502024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:40:26.023 [2024-12-09 23:22:06.502048] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:40:26.023 1 00:40:26.023 23:22:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.023 23:22:06 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74173 00:40:26.960 [2024-12-09 23:22:07.502084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:40:26.960 [2024-12-09 23:22:07.509020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:40:26.960 [2024-12-09 23:22:07.509043] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:40:27.898 [2024-12-09 23:22:08.509073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:40:27.898 [2024-12-09 23:22:08.514004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:40:27.898 [2024-12-09 23:22:08.514025] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:40:29.280 [2024-12-09 23:22:09.514057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:40:29.280 [2024-12-09 23:22:09.519014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:40:29.280 [2024-12-09 23:22:09.519034] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:40:29.280 [2024-12-09 23:22:09.519045] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:40:29.280 [2024-12-09 23:22:09.519134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:40:51.286 [2024-12-09 23:22:30.718024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:40:51.286 [2024-12-09 23:22:30.721935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:40:51.286 [2024-12-09 23:22:30.732154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:40:51.286 [2024-12-09 23:22:30.732173] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:41:17.835 00:41:17.835 fio_test: (groupid=0, jobs=1): err= 0: pid=74176: Mon Dec 9 23:22:55 2024 00:41:17.835 read: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(3342MiB/60003msec) 00:41:17.835 slat (nsec): min=1171, max=191871, avg=5328.80, stdev=1704.29 00:41:17.835 clat (usec): min=1142, max=30297k, avg=4088.81, stdev=240678.24 00:41:17.835 lat (usec): min=1152, max=30297k, avg=4094.14, stdev=240678.24 00:41:17.835 clat percentiles (usec): 00:41:17.835 | 1.00th=[ 1762], 5.00th=[ 1860], 10.00th=[ 1909], 20.00th=[ 1975], 00:41:17.835 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2073], 00:41:17.835 | 70.00th=[ 2089], 80.00th=[ 2114], 90.00th=[ 2343], 95.00th=[ 3195], 00:41:17.835 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 7111], 99.95th=[ 7898], 00:41:17.835 | 99.99th=[13042] 00:41:17.835 bw ( KiB/s): min=49928, max=127768, per=100.00%, avg=114144.22, stdev=13342.80, samples=59 00:41:17.835 iops : min=12482, max=31942, avg=28536.05, stdev=3335.70, samples=59 00:41:17.835 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(3337MiB/60003msec); 0 zone resets 00:41:17.835 slat (nsec): min=1187, max=311960, avg=5484.98, stdev=1806.23 00:41:17.835 clat (usec): min=671, max=30297k, avg=4884.11, stdev=281934.25 00:41:17.835 lat (usec): min=675, max=30297k, avg=4889.59, stdev=281934.25 00:41:17.835 clat percentiles (usec): 00:41:17.835 | 1.00th=[ 1811], 5.00th=[ 1942], 10.00th=[ 1991], 20.00th=[ 2057], 00:41:17.835 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:41:17.835 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2409], 95.00th=[ 3163], 00:41:17.835 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[ 7963], 00:41:17.835 | 99.99th=[13173] 00:41:17.835 bw ( KiB/s): min=50176, max=126952, per=100.00%, avg=113995.90, stdev=13373.97, samples=59 00:41:17.835 iops : min=12544, max=31738, avg=28498.97, stdev=3343.49, samples=59 00:41:17.835 lat (usec) : 750=0.01% 00:41:17.835 lat (msec) : 2=20.33%, 4=76.73%, 10=2.91%, 20=0.03%, >=2000=0.01% 00:41:17.835 cpu : usr=3.06%, sys=15.77%, ctx=57047, majf=0, minf=13 00:41:17.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:41:17.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:17.835 issued 
rwts: total=855446,854281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:17.835 00:41:17.835 Run status group 0 (all jobs): 00:41:17.835 READ: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=3342MiB (3504MB), run=60003-60003msec 00:41:17.835 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=3337MiB (3499MB), run=60003-60003msec 00:41:17.835 00:41:17.835 Disk stats (read/write): 00:41:17.835 ublkb1: ios=852232/851154, merge=0/0, ticks=3443102/4046579, in_queue=7489682, util=99.89% 00:41:17.835 23:22:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:41:17.835 23:22:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.835 23:22:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:17.835 [2024-12-09 23:22:55.691117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:41:17.835 [2024-12-09 23:22:55.731004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:17.835 [2024-12-09 23:22:55.731159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:41:17.835 [2024-12-09 23:22:55.739014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:17.835 [2024-12-09 23:22:55.739106] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:41:17.835 [2024-12-09 23:22:55.739113] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:41:17.835 23:22:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.835 23:22:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:41:17.835 23:22:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.835 23:22:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:17.835 [2024-12-09 23:22:55.755090] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:17.836 [2024-12-09 23:22:55.758908] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:17.836 [2024-12-09 23:22:55.758942] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.836 23:22:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:17.836 23:22:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:41:17.836 23:22:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74278 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74278 ']' 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74278 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74278 00:41:17.836 killing process with pid 74278 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74278' 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74278 00:41:17.836 23:22:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74278 
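Between the restart and the teardown just logged, the second target adopts the surviving /dev/ublkb1 rather than recreating it. A sketch of that recovery half under the same assumptions; the repeated "device state 1" lines above are the target retrying UBLK_CMD_GET_DEV_INFO, roughly once per second by the timestamps:

```bash
# Recovery half: a fresh target re-attaches to the still-open ublk device.
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

"$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &

"$RPC" ublk_create_target
"$RPC" bdev_malloc_create -b malloc0 64 4096   # same bdev name and geometry as before
"$RPC" ublk_recover_disk malloc0 1             # adopt ublk id 1 instead of starting it

# The target then retries UBLK_CMD_GET_DEV_INFO until the device is ready
# and drives UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY,
# exactly as the ctrl-cmd trace above shows.
```

The cost of the crash shows up only in the tail of the fio results above: the ~30 s max completion latency matches the window between the SIGKILL and "recover done successfully", while the rest of the run holds ~55 MiB/s each way with zero errors and 99.89% device utilization.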
00:41:17.836 [2024-12-09 23:22:56.966672] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:17.836 [2024-12-09 23:22:56.966730] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:17.836 ************************************ 00:41:17.836 END TEST ublk_recovery 00:41:17.836 ************************************ 00:41:17.836 00:41:17.836 real 1m4.816s 00:41:17.836 user 1m47.602s 00:41:17.836 sys 0m22.530s 00:41:17.836 23:22:58 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.836 23:22:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:17.836 23:22:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:41:17.836 23:22:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:41:17.836 23:22:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:17.836 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:41:17.836 23:22:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:41:17.836 23:22:58 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:41:17.836 23:22:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:17.836 23:22:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.836 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:41:17.836 ************************************ 00:41:17.836 START TEST ftl 00:41:17.836 ************************************ 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:41:17.836 * Looking for test storage... 
00:41:17.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:17.836 23:22:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:17.836 23:22:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:41:17.836 23:22:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:41:17.836 23:22:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:41:17.836 23:22:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:17.836 23:22:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:41:17.836 23:22:58 ftl -- scripts/common.sh@345 -- # : 1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:17.836 23:22:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:17.836 23:22:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@353 -- # local d=1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:17.836 23:22:58 ftl -- scripts/common.sh@355 -- # echo 1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:41:17.836 23:22:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@353 -- # local d=2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:17.836 23:22:58 ftl -- scripts/common.sh@355 -- # echo 2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:41:17.836 23:22:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:17.836 23:22:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:17.836 23:22:58 ftl -- scripts/common.sh@368 -- # return 0 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:17.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.836 --rc genhtml_branch_coverage=1 00:41:17.836 --rc genhtml_function_coverage=1 00:41:17.836 --rc genhtml_legend=1 00:41:17.836 --rc geninfo_all_blocks=1 00:41:17.836 --rc geninfo_unexecuted_blocks=1 00:41:17.836 00:41:17.836 ' 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:17.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.836 --rc genhtml_branch_coverage=1 00:41:17.836 --rc genhtml_function_coverage=1 00:41:17.836 --rc genhtml_legend=1 00:41:17.836 --rc geninfo_all_blocks=1 00:41:17.836 --rc geninfo_unexecuted_blocks=1 00:41:17.836 00:41:17.836 ' 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:17.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.836 --rc genhtml_branch_coverage=1 00:41:17.836 --rc genhtml_function_coverage=1 00:41:17.836 --rc 
genhtml_legend=1 00:41:17.836 --rc geninfo_all_blocks=1 00:41:17.836 --rc geninfo_unexecuted_blocks=1 00:41:17.836 00:41:17.836 ' 00:41:17.836 23:22:58 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:17.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.836 --rc genhtml_branch_coverage=1 00:41:17.836 --rc genhtml_function_coverage=1 00:41:17.836 --rc genhtml_legend=1 00:41:17.836 --rc geninfo_all_blocks=1 00:41:17.836 --rc geninfo_unexecuted_blocks=1 00:41:17.836 00:41:17.836 ' 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:41:17.836 23:22:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:41:17.836 23:22:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:41:17.836 23:22:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:41:17.836 23:22:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:41:17.836 23:22:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:17.836 23:22:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:17.836 23:22:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:17.836 23:22:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:17.836 23:22:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:17.836 23:22:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:17.836 23:22:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:41:17.836 23:22:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:41:17.836 23:22:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:17.836 23:22:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:17.836 23:22:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:41:17.836 23:22:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:17.836 23:22:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:17.836 23:22:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:17.836 23:22:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:17.836 23:22:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:41:17.836 23:22:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:41:17.836 23:22:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.836 23:22:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:41:17.836 23:22:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:18.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:18.097 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.097 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.097 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.097 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.097 23:22:58 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75089 00:41:18.097 23:22:58 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75089 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@835 -- # '[' -z 75089 ']' 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.097 23:22:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:18.097 23:22:58 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:41:18.358 [2024-12-09 23:22:58.792249] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:41:18.358 [2024-12-09 23:22:58.792374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75089 ] 00:41:18.358 [2024-12-09 23:22:58.952549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.619 [2024-12-09 23:22:59.060260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.190 23:22:59 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:19.190 23:22:59 ftl -- common/autotest_common.sh@868 -- # return 0 00:41:19.190 23:22:59 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:41:19.190 23:22:59 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:41:20.131 23:23:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:20.131 23:23:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@50 -- # break 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:41:20.701 23:23:01 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:41:20.701 23:23:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:41:20.961 23:23:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:41:20.961 23:23:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:41:20.961 23:23:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:41:20.961 23:23:01 ftl -- ftl/ftl.sh@63 -- # break 00:41:20.961 23:23:01 ftl -- ftl/ftl.sh@66 -- # killprocess 75089 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 75089 ']' 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@958 -- # kill -0 75089 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@959 -- # uname 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75089 00:41:20.961 killing process with pid 75089 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75089' 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@973 -- # kill 75089 00:41:20.961 23:23:01 ftl -- common/autotest_common.sh@978 -- # wait 75089 00:41:22.869 23:23:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:41:22.869 23:23:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:41:22.869 23:23:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:41:22.869 23:23:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.870 23:23:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:22.870 ************************************ 00:41:22.870 START TEST ftl_fio_basic 00:41:22.870 ************************************ 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:41:22.870 * Looking for test storage... 
00:41:22.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:22.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.870 --rc genhtml_branch_coverage=1 00:41:22.870 --rc genhtml_function_coverage=1 00:41:22.870 --rc genhtml_legend=1 00:41:22.870 --rc geninfo_all_blocks=1 00:41:22.870 --rc geninfo_unexecuted_blocks=1 00:41:22.870 00:41:22.870 ' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:22.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.870 --rc 
genhtml_branch_coverage=1 00:41:22.870 --rc genhtml_function_coverage=1 00:41:22.870 --rc genhtml_legend=1 00:41:22.870 --rc geninfo_all_blocks=1 00:41:22.870 --rc geninfo_unexecuted_blocks=1 00:41:22.870 00:41:22.870 ' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:22.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.870 --rc genhtml_branch_coverage=1 00:41:22.870 --rc genhtml_function_coverage=1 00:41:22.870 --rc genhtml_legend=1 00:41:22.870 --rc geninfo_all_blocks=1 00:41:22.870 --rc geninfo_unexecuted_blocks=1 00:41:22.870 00:41:22.870 ' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:22.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.870 --rc genhtml_branch_coverage=1 00:41:22.870 --rc genhtml_function_coverage=1 00:41:22.870 --rc genhtml_legend=1 00:41:22.870 --rc geninfo_all_blocks=1 00:41:22.870 --rc geninfo_unexecuted_blocks=1 00:41:22.870 00:41:22.870 ' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:22.870 
23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75221 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75221 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75221 ']' 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:22.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
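The fio.sh prologue just traced is mostly wiring: an associative array maps each suite name to a list of fio job configs, 'basic' is selected for this run, and ftl0 plus its JSON config are exported for the job files to pick up. A sketch of that dispatch, with echo standing in for the real fio invocation:

```bash
# How fio.sh resolves a suite into individual jobs, mirroring the
# declarations traced above; echo stands in for the real fio call.
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

export FTL_BDEV_NAME=ftl0
export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

for test in ${suite['basic']}; do
    echo "would run fio job '$test' against $FTL_BDEV_NAME"
done
```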
00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:22.870 23:23:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:41:22.870 [2024-12-09 23:23:03.335583] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:41:22.870 [2024-12-09 23:23:03.335833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75221 ] 00:41:22.870 [2024-12-09 23:23:03.494730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:23.128 [2024-12-09 23:23:03.586580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.128 [2024-12-09 23:23:03.586865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:23.128 [2024-12-09 23:23:03.586837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:41:23.694 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:41:23.952 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:24.245 { 00:41:24.245 "name": "nvme0n1", 00:41:24.245 "aliases": [ 00:41:24.245 "84e0196e-5b69-4fc7-9f42-5617017a4513" 00:41:24.245 ], 00:41:24.245 "product_name": "NVMe disk", 00:41:24.245 "block_size": 4096, 00:41:24.245 "num_blocks": 1310720, 00:41:24.245 "uuid": "84e0196e-5b69-4fc7-9f42-5617017a4513", 00:41:24.245 "numa_id": -1, 00:41:24.245 "assigned_rate_limits": { 00:41:24.245 "rw_ios_per_sec": 0, 00:41:24.245 "rw_mbytes_per_sec": 0, 00:41:24.245 "r_mbytes_per_sec": 0, 00:41:24.245 "w_mbytes_per_sec": 0 00:41:24.245 }, 00:41:24.245 "claimed": false, 00:41:24.245 "zoned": false, 00:41:24.245 "supported_io_types": { 00:41:24.245 "read": true, 00:41:24.245 "write": true, 00:41:24.245 "unmap": true, 00:41:24.245 "flush": true, 00:41:24.245 "reset": true, 00:41:24.245 "nvme_admin": true, 00:41:24.245 "nvme_io": true, 00:41:24.245 "nvme_io_md": 
false, 00:41:24.245 "write_zeroes": true, 00:41:24.245 "zcopy": false, 00:41:24.245 "get_zone_info": false, 00:41:24.245 "zone_management": false, 00:41:24.245 "zone_append": false, 00:41:24.245 "compare": true, 00:41:24.245 "compare_and_write": false, 00:41:24.245 "abort": true, 00:41:24.245 "seek_hole": false, 00:41:24.245 "seek_data": false, 00:41:24.245 "copy": true, 00:41:24.245 "nvme_iov_md": false 00:41:24.245 }, 00:41:24.245 "driver_specific": { 00:41:24.245 "nvme": [ 00:41:24.245 { 00:41:24.245 "pci_address": "0000:00:11.0", 00:41:24.245 "trid": { 00:41:24.245 "trtype": "PCIe", 00:41:24.245 "traddr": "0000:00:11.0" 00:41:24.245 }, 00:41:24.245 "ctrlr_data": { 00:41:24.245 "cntlid": 0, 00:41:24.245 "vendor_id": "0x1b36", 00:41:24.245 "model_number": "QEMU NVMe Ctrl", 00:41:24.245 "serial_number": "12341", 00:41:24.245 "firmware_revision": "8.0.0", 00:41:24.245 "subnqn": "nqn.2019-08.org.qemu:12341", 00:41:24.245 "oacs": { 00:41:24.245 "security": 0, 00:41:24.245 "format": 1, 00:41:24.245 "firmware": 0, 00:41:24.245 "ns_manage": 1 00:41:24.245 }, 00:41:24.245 "multi_ctrlr": false, 00:41:24.245 "ana_reporting": false 00:41:24.245 }, 00:41:24.245 "vs": { 00:41:24.245 "nvme_version": "1.4" 00:41:24.245 }, 00:41:24.245 "ns_data": { 00:41:24.245 "id": 1, 00:41:24.245 "can_share": false 00:41:24.245 } 00:41:24.245 } 00:41:24.245 ], 00:41:24.245 "mp_policy": "active_passive" 00:41:24.245 } 00:41:24.245 } 00:41:24.245 ]' 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:24.245 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:24.524 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:41:24.524 23:23:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:41:24.524 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7147723f-6860-4622-b017-057f07f39980 00:41:24.524 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7147723f-6860-4622-b017-057f07f39980 00:41:24.782 23:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:24.782 23:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:24.782 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:41:24.782 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:41:24.782 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:24.783 23:23:05 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:41:24.783 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.041 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:25.041 { 00:41:25.041 "name": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:25.041 "aliases": [ 00:41:25.041 "lvs/nvme0n1p0" 00:41:25.041 ], 00:41:25.041 "product_name": "Logical Volume", 00:41:25.041 "block_size": 4096, 00:41:25.041 "num_blocks": 26476544, 00:41:25.041 "uuid": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:25.041 "assigned_rate_limits": { 00:41:25.041 "rw_ios_per_sec": 0, 00:41:25.041 "rw_mbytes_per_sec": 0, 00:41:25.041 "r_mbytes_per_sec": 0, 00:41:25.041 "w_mbytes_per_sec": 0 00:41:25.041 }, 00:41:25.041 "claimed": false, 00:41:25.041 "zoned": false, 00:41:25.041 "supported_io_types": { 00:41:25.041 "read": true, 00:41:25.041 "write": true, 00:41:25.041 "unmap": true, 00:41:25.041 "flush": false, 00:41:25.041 "reset": true, 00:41:25.041 "nvme_admin": false, 00:41:25.041 "nvme_io": false, 00:41:25.041 "nvme_io_md": false, 00:41:25.041 "write_zeroes": true, 00:41:25.041 "zcopy": false, 00:41:25.041 "get_zone_info": false, 00:41:25.041 "zone_management": false, 00:41:25.041 "zone_append": false, 00:41:25.041 "compare": false, 00:41:25.041 "compare_and_write": false, 00:41:25.041 "abort": false, 00:41:25.041 "seek_hole": true, 00:41:25.041 "seek_data": true, 00:41:25.041 "copy": false, 00:41:25.041 "nvme_iov_md": false 00:41:25.041 }, 00:41:25.041 "driver_specific": { 00:41:25.041 "lvol": { 00:41:25.041 "lvol_store_uuid": "7147723f-6860-4622-b017-057f07f39980", 00:41:25.041 "base_bdev": "nvme0n1", 00:41:25.042 "thin_provision": true, 00:41:25.042 "num_allocated_clusters": 0, 00:41:25.042 "snapshot": false, 00:41:25.042 "clone": false, 00:41:25.042 "esnap_clone": false 00:41:25.042 } 00:41:25.042 } 00:41:25.042 } 00:41:25.042 ]' 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:41:25.042 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:41:25.300 23:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.558 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:25.558 { 00:41:25.558 "name": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:25.558 "aliases": [ 00:41:25.558 "lvs/nvme0n1p0" 00:41:25.558 ], 00:41:25.558 "product_name": "Logical Volume", 00:41:25.558 "block_size": 4096, 00:41:25.558 "num_blocks": 26476544, 00:41:25.558 "uuid": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:25.558 "assigned_rate_limits": { 00:41:25.558 "rw_ios_per_sec": 0, 00:41:25.558 "rw_mbytes_per_sec": 0, 00:41:25.558 "r_mbytes_per_sec": 0, 00:41:25.558 "w_mbytes_per_sec": 0 00:41:25.558 }, 00:41:25.558 "claimed": false, 00:41:25.558 "zoned": false, 00:41:25.558 "supported_io_types": { 00:41:25.558 "read": true, 00:41:25.558 "write": true, 00:41:25.558 "unmap": true, 00:41:25.558 "flush": false, 00:41:25.558 "reset": true, 00:41:25.558 "nvme_admin": false, 00:41:25.558 "nvme_io": false, 00:41:25.558 "nvme_io_md": false, 00:41:25.558 "write_zeroes": true, 00:41:25.558 "zcopy": false, 00:41:25.558 "get_zone_info": false, 00:41:25.558 "zone_management": false, 00:41:25.558 "zone_append": false, 00:41:25.558 "compare": false, 00:41:25.558 "compare_and_write": false, 00:41:25.558 "abort": false, 00:41:25.558 "seek_hole": true, 00:41:25.558 "seek_data": true, 00:41:25.558 "copy": false, 00:41:25.558 "nvme_iov_md": false 00:41:25.558 }, 00:41:25.558 "driver_specific": { 00:41:25.558 "lvol": { 00:41:25.558 "lvol_store_uuid": "7147723f-6860-4622-b017-057f07f39980", 00:41:25.558 "base_bdev": "nvme0n1", 00:41:25.558 "thin_provision": true, 00:41:25.558 "num_allocated_clusters": 0, 00:41:25.558 "snapshot": false, 00:41:25.558 "clone": false, 00:41:25.558 "esnap_clone": false 00:41:25.558 } 00:41:25.558 } 00:41:25.558 } 00:41:25.558 ]' 00:41:25.558 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:25.558 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:41:25.559 23:23:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:41:25.817 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:41:25.817 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:26.075 { 00:41:26.075 "name": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:26.075 "aliases": [ 00:41:26.075 "lvs/nvme0n1p0" 00:41:26.075 ], 00:41:26.075 "product_name": "Logical Volume", 00:41:26.075 "block_size": 4096, 00:41:26.075 "num_blocks": 26476544, 00:41:26.075 "uuid": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:26.075 "assigned_rate_limits": { 00:41:26.075 "rw_ios_per_sec": 0, 00:41:26.075 "rw_mbytes_per_sec": 0, 00:41:26.075 "r_mbytes_per_sec": 0, 00:41:26.075 "w_mbytes_per_sec": 0 00:41:26.075 }, 00:41:26.075 "claimed": false, 00:41:26.075 "zoned": false, 00:41:26.075 "supported_io_types": { 00:41:26.075 "read": true, 00:41:26.075 "write": true, 00:41:26.075 "unmap": true, 00:41:26.075 "flush": false, 00:41:26.075 "reset": true, 00:41:26.075 "nvme_admin": false, 00:41:26.075 "nvme_io": false, 00:41:26.075 "nvme_io_md": false, 00:41:26.075 "write_zeroes": true, 00:41:26.075 "zcopy": false, 00:41:26.075 "get_zone_info": false, 00:41:26.075 "zone_management": false, 00:41:26.075 "zone_append": false, 00:41:26.075 "compare": false, 00:41:26.075 "compare_and_write": false, 00:41:26.075 "abort": false, 00:41:26.075 "seek_hole": true, 00:41:26.075 "seek_data": true, 00:41:26.075 "copy": false, 00:41:26.075 "nvme_iov_md": false 00:41:26.075 }, 00:41:26.075 "driver_specific": { 00:41:26.075 "lvol": { 00:41:26.075 "lvol_store_uuid": "7147723f-6860-4622-b017-057f07f39980", 00:41:26.075 "base_bdev": "nvme0n1", 00:41:26.075 "thin_provision": true, 00:41:26.075 "num_allocated_clusters": 0, 00:41:26.075 "snapshot": false, 00:41:26.075 "clone": false, 00:41:26.075 "esnap_clone": false 00:41:26.075 } 00:41:26.075 } 00:41:26.075 } 00:41:26.075 ]' 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:41:26.075 23:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467 -c nvc0n1p0 --l2p_dram_limit 60 00:41:26.334 [2024-12-09 23:23:06.809069] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.809116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:26.334 [2024-12-09 23:23:06.809130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:26.334 [2024-12-09 23:23:06.809137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.809181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.809192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:26.334 [2024-12-09 23:23:06.809200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:41:26.334 [2024-12-09 23:23:06.809207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.809235] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:26.334 [2024-12-09 23:23:06.809752] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:26.334 [2024-12-09 23:23:06.809770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.809776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:26.334 [2024-12-09 23:23:06.809785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:41:26.334 [2024-12-09 23:23:06.809792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.809823] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cd71ef5d-9f54-4a91-8b47-611fdcff16d0 00:41:26.334 [2024-12-09 23:23:06.811201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.811322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:41:26.334 [2024-12-09 23:23:06.811337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:41:26.334 [2024-12-09 23:23:06.811347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.818302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.818388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:26.334 [2024-12-09 23:23:06.818431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms 00:41:26.334 [2024-12-09 23:23:06.818450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.818547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.818569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:26.334 [2024-12-09 23:23:06.818586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:41:26.334 [2024-12-09 23:23:06.818605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.818657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.818679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:26.334 [2024-12-09 23:23:06.818695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:26.334 [2024-12-09 23:23:06.818811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:41:26.334 [2024-12-09 23:23:06.818851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:26.334 [2024-12-09 23:23:06.822123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.822205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:26.334 [2024-12-09 23:23:06.822250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:41:26.334 [2024-12-09 23:23:06.822270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.822315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.822332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:26.334 [2024-12-09 23:23:06.822349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:26.334 [2024-12-09 23:23:06.822364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.822399] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:41:26.334 [2024-12-09 23:23:06.822574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:26.334 [2024-12-09 23:23:06.822711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:26.334 [2024-12-09 23:23:06.822806] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:26.334 [2024-12-09 23:23:06.822836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:26.334 [2024-12-09 23:23:06.822860] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:26.334 [2024-12-09 23:23:06.822887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:26.334 [2024-12-09 23:23:06.822937] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:26.334 [2024-12-09 23:23:06.822956] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:26.334 [2024-12-09 23:23:06.822972] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:26.334 [2024-12-09 23:23:06.823003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.823022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:26.334 [2024-12-09 23:23:06.823039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:41:26.334 [2024-12-09 23:23:06.823081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.823167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.334 [2024-12-09 23:23:06.823187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:26.334 [2024-12-09 23:23:06.823204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:41:26.334 [2024-12-09 23:23:06.823269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.334 [2024-12-09 23:23:06.823378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:26.334 [2024-12-09 23:23:06.823414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:26.334 
[2024-12-09 23:23:06.823434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:26.334 [2024-12-09 23:23:06.823476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.334 [2024-12-09 23:23:06.823496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:26.334 [2024-12-09 23:23:06.823510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:26.335 [2024-12-09 23:23:06.823542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:26.335 [2024-12-09 23:23:06.823560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:26.335 [2024-12-09 23:23:06.823589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:26.335 [2024-12-09 23:23:06.823603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:26.335 [2024-12-09 23:23:06.823618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:26.335 [2024-12-09 23:23:06.823633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:26.335 [2024-12-09 23:23:06.823691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:26.335 [2024-12-09 23:23:06.823708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:26.335 [2024-12-09 23:23:06.823740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:26.335 [2024-12-09 23:23:06.823756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:26.335 [2024-12-09 23:23:06.823786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:26.335 [2024-12-09 23:23:06.823817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:26.335 [2024-12-09 23:23:06.823894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:26.335 [2024-12-09 23:23:06.823928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:26.335 [2024-12-09 23:23:06.823944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:26.335 [2024-12-09 23:23:06.823957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:26.335 [2024-12-09 23:23:06.823973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:26.335 [2024-12-09 23:23:06.823997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:26.335 [2024-12-09 23:23:06.824045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:26.335 [2024-12-09 23:23:06.824060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:26.335 [2024-12-09 23:23:06.824085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:26.335 [2024-12-09 23:23:06.824114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:41:26.335 [2024-12-09 23:23:06.824130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:26.335 [2024-12-09 23:23:06.824144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:26.335 [2024-12-09 23:23:06.824159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:26.335 [2024-12-09 23:23:06.824175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:26.335 [2024-12-09 23:23:06.824190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:26.335 [2024-12-09 23:23:06.824204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.824220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:26.335 [2024-12-09 23:23:06.824233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:26.335 [2024-12-09 23:23:06.824291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.824308] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:26.335 [2024-12-09 23:23:06.824325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:26.335 [2024-12-09 23:23:06.824340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:26.335 [2024-12-09 23:23:06.824358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:26.335 [2024-12-09 23:23:06.824372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:26.335 [2024-12-09 23:23:06.824423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:26.335 [2024-12-09 23:23:06.824440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:26.335 [2024-12-09 23:23:06.824481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:26.335 [2024-12-09 23:23:06.824499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:26.335 [2024-12-09 23:23:06.824516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:26.335 [2024-12-09 23:23:06.824532] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:26.335 [2024-12-09 23:23:06.824558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.824582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:26.335 [2024-12-09 23:23:06.824605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:26.335 [2024-12-09 23:23:06.824627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:26.335 [2024-12-09 23:23:06.824650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:26.335 [2024-12-09 23:23:06.824725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:26.335 [2024-12-09 23:23:06.824751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:26.335 [2024-12-09 
23:23:06.824774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:26.335 [2024-12-09 23:23:06.824829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:26.335 [2024-12-09 23:23:06.824852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:26.335 [2024-12-09 23:23:06.824879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.824929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.824956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.825102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.825247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:26.335 [2024-12-09 23:23:06.825273] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:26.335 [2024-12-09 23:23:06.825299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.825324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:26.335 [2024-12-09 23:23:06.825347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:26.335 [2024-12-09 23:23:06.825436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:26.335 [2024-12-09 23:23:06.825462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:26.335 [2024-12-09 23:23:06.825486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.335 [2024-12-09 23:23:06.825504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:26.335 [2024-12-09 23:23:06.825608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:41:26.335 [2024-12-09 23:23:06.825629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.335 [2024-12-09 23:23:06.825687] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:41:26.335 [2024-12-09 23:23:06.825717] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:41:30.517 [2024-12-09 23:23:10.825778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.826012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:41:30.517 [2024-12-09 23:23:10.826150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4000.073 ms 00:41:30.517 [2024-12-09 23:23:10.826181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.854142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.854308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:30.517 [2024-12-09 23:23:10.854375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.693 ms 00:41:30.517 [2024-12-09 23:23:10.854401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.854550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.854731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:30.517 [2024-12-09 23:23:10.854757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:41:30.517 [2024-12-09 23:23:10.854782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.900029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.900194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:30.517 [2024-12-09 23:23:10.900262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.186 ms 00:41:30.517 [2024-12-09 23:23:10.900291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.900347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.900373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:30.517 [2024-12-09 23:23:10.900393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:30.517 [2024-12-09 23:23:10.900415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.900867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.900977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:30.517 [2024-12-09 23:23:10.901048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:41:30.517 [2024-12-09 23:23:10.901077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.901217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.901284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:30.517 [2024-12-09 23:23:10.901309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:41:30.517 [2024-12-09 23:23:10.901332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.917313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.917343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:30.517 [2024-12-09 
23:23:10.917354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.911 ms 00:41:30.517 [2024-12-09 23:23:10.917364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:10.929592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:41:30.517 [2024-12-09 23:23:10.946681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:10.946714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:30.517 [2024-12-09 23:23:10.946729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.213 ms 00:41:30.517 [2024-12-09 23:23:10.946737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.022955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.023152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:41:30.517 [2024-12-09 23:23:11.023178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.180 ms 00:41:30.517 [2024-12-09 23:23:11.023188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.023376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.023388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:30.517 [2024-12-09 23:23:11.023401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:41:30.517 [2024-12-09 23:23:11.023409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.046367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.046491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:41:30.517 [2024-12-09 23:23:11.046511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.884 ms 00:41:30.517 [2024-12-09 23:23:11.046519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.069378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.069485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:41:30.517 [2024-12-09 23:23:11.069504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.819 ms 00:41:30.517 [2024-12-09 23:23:11.069511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.070121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.070138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:30.517 [2024-12-09 23:23:11.070150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:41:30.517 [2024-12-09 23:23:11.070157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.517 [2024-12-09 23:23:11.144293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.517 [2024-12-09 23:23:11.144409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:41:30.517 [2024-12-09 23:23:11.144431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.098 ms 00:41:30.517 [2024-12-09 23:23:11.144441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 
23:23:11.169762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.776 [2024-12-09 23:23:11.169794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:41:30.776 [2024-12-09 23:23:11.169808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.244 ms 00:41:30.776 [2024-12-09 23:23:11.169816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 23:23:11.193598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.776 [2024-12-09 23:23:11.193631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:41:30.776 [2024-12-09 23:23:11.193644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:41:30.776 [2024-12-09 23:23:11.193652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 23:23:11.218485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.776 [2024-12-09 23:23:11.218516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:30.776 [2024-12-09 23:23:11.218528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.793 ms 00:41:30.776 [2024-12-09 23:23:11.218536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 23:23:11.218577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.776 [2024-12-09 23:23:11.218586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:30.776 [2024-12-09 23:23:11.218602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:30.776 [2024-12-09 23:23:11.218609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 23:23:11.218696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.776 [2024-12-09 23:23:11.218707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:30.776 [2024-12-09 23:23:11.218717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:41:30.776 [2024-12-09 23:23:11.218724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.776 [2024-12-09 23:23:11.219737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4410.196 ms, result 0 00:41:30.776 { 00:41:30.776 "name": "ftl0", 00:41:30.776 "uuid": "cd71ef5d-9f54-4a91-8b47-611fdcff16d0" 00:41:30.776 } 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:30.776 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:31.034 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:41:31.034 [ 00:41:31.034 { 00:41:31.034 "name": "ftl0", 00:41:31.034 "aliases": [ 00:41:31.034 "cd71ef5d-9f54-4a91-8b47-611fdcff16d0" 00:41:31.034 ], 00:41:31.034 "product_name": "FTL 
disk", 00:41:31.034 "block_size": 4096, 00:41:31.034 "num_blocks": 20971520, 00:41:31.034 "uuid": "cd71ef5d-9f54-4a91-8b47-611fdcff16d0", 00:41:31.034 "assigned_rate_limits": { 00:41:31.034 "rw_ios_per_sec": 0, 00:41:31.034 "rw_mbytes_per_sec": 0, 00:41:31.034 "r_mbytes_per_sec": 0, 00:41:31.034 "w_mbytes_per_sec": 0 00:41:31.034 }, 00:41:31.034 "claimed": false, 00:41:31.034 "zoned": false, 00:41:31.034 "supported_io_types": { 00:41:31.034 "read": true, 00:41:31.034 "write": true, 00:41:31.034 "unmap": true, 00:41:31.034 "flush": true, 00:41:31.034 "reset": false, 00:41:31.034 "nvme_admin": false, 00:41:31.034 "nvme_io": false, 00:41:31.034 "nvme_io_md": false, 00:41:31.034 "write_zeroes": true, 00:41:31.034 "zcopy": false, 00:41:31.034 "get_zone_info": false, 00:41:31.034 "zone_management": false, 00:41:31.034 "zone_append": false, 00:41:31.034 "compare": false, 00:41:31.034 "compare_and_write": false, 00:41:31.034 "abort": false, 00:41:31.034 "seek_hole": false, 00:41:31.034 "seek_data": false, 00:41:31.034 "copy": false, 00:41:31.034 "nvme_iov_md": false 00:41:31.034 }, 00:41:31.034 "driver_specific": { 00:41:31.034 "ftl": { 00:41:31.034 "base_bdev": "dc7a6f0d-c8f6-4c1f-ab9b-67e5f2d84467", 00:41:31.034 "cache": "nvc0n1p0" 00:41:31.034 } 00:41:31.034 } 00:41:31.034 } 00:41:31.034 ] 00:41:31.034 23:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:41:31.034 23:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:41:31.034 23:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:41:31.292 23:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:41:31.292 23:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:41:31.551 [2024-12-09 23:23:12.036321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.551 [2024-12-09 23:23:12.036377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:31.551 [2024-12-09 23:23:12.036391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:31.551 [2024-12-09 23:23:12.036402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.551 [2024-12-09 23:23:12.036434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:31.551 [2024-12-09 23:23:12.039242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.551 [2024-12-09 23:23:12.039272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:31.551 [2024-12-09 23:23:12.039288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:41:31.551 [2024-12-09 23:23:12.039296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.551 [2024-12-09 23:23:12.039670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.551 [2024-12-09 23:23:12.039681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:31.551 [2024-12-09 23:23:12.039691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:41:31.551 [2024-12-09 23:23:12.039698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.551 [2024-12-09 23:23:12.043140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.551 [2024-12-09 23:23:12.043163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:31.551 
[2024-12-09 23:23:12.043174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:41:31.551 [2024-12-09 23:23:12.043183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.551 [2024-12-09 23:23:12.049290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.551 [2024-12-09 23:23:12.049313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:31.552 [2024-12-09 23:23:12.049324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.084 ms 00:41:31.552 [2024-12-09 23:23:12.049332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.074068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.074219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:31.552 [2024-12-09 23:23:12.074252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:41:31.552 [2024-12-09 23:23:12.074260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.090308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.090351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:31.552 [2024-12-09 23:23:12.090368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.005 ms 00:41:31.552 [2024-12-09 23:23:12.090376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.090555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.090566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:31.552 [2024-12-09 23:23:12.090577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:41:31.552 [2024-12-09 23:23:12.090584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.113250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.113367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:31.552 [2024-12-09 23:23:12.113386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.640 ms 00:41:31.552 [2024-12-09 23:23:12.113394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.136840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.136944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:31.552 [2024-12-09 23:23:12.136962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms 00:41:31.552 [2024-12-09 23:23:12.136970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.160005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.160033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:31.552 [2024-12-09 23:23:12.160044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:41:31.552 [2024-12-09 23:23:12.160052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.182403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.552 [2024-12-09 23:23:12.182430] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:31.552 [2024-12-09 23:23:12.182443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.267 ms 00:41:31.552 [2024-12-09 23:23:12.182450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.552 [2024-12-09 23:23:12.182492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:31.552 [2024-12-09 23:23:12.182506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 
[2024-12-09 23:23:12.182699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:41:31.552 [2024-12-09 23:23:12.182929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.182978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:31.552 [2024-12-09 23:23:12.183120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:31.553 [2024-12-09 23:23:12.183451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:31.553 [2024-12-09 23:23:12.183461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd71ef5d-9f54-4a91-8b47-611fdcff16d0 00:41:31.553 [2024-12-09 23:23:12.183468] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:31.553 [2024-12-09 23:23:12.183479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:31.553 [2024-12-09 23:23:12.183486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:31.553 [2024-12-09 23:23:12.183497] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:31.553 [2024-12-09 23:23:12.183505] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:31.553 [2024-12-09 23:23:12.183514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:31.553 [2024-12-09 23:23:12.183522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:31.553 [2024-12-09 23:23:12.183530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:31.553 [2024-12-09 23:23:12.183536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:31.553 [2024-12-09 23:23:12.183544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.553 [2024-12-09 23:23:12.183552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:31.553 [2024-12-09 23:23:12.183562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:41:31.553 [2024-12-09 23:23:12.183569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.196611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.812 [2024-12-09 23:23:12.196642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:31.812 [2024-12-09 23:23:12.196654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.002 ms 00:41:31.812 [2024-12-09 23:23:12.196663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.197050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.812 [2024-12-09 23:23:12.197061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:31.812 [2024-12-09 23:23:12.197073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:41:31.812 [2024-12-09 23:23:12.197080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.242643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.242786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:31.812 [2024-12-09 23:23:12.242806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.242814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:41:31.812 [2024-12-09 23:23:12.242879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.242887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:31.812 [2024-12-09 23:23:12.242897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.242904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.243020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.243034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:31.812 [2024-12-09 23:23:12.243045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.243052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.243083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.243091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:31.812 [2024-12-09 23:23:12.243101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.243109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.326910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.326957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:31.812 [2024-12-09 23:23:12.326969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.326977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.392820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.392866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:31.812 [2024-12-09 23:23:12.392879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.392888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.392972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:31.812 [2024-12-09 23:23:12.393014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.393022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:31.812 [2024-12-09 23:23:12.393122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.393129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:31.812 [2024-12-09 23:23:12.393277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 
23:23:12.393287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:31.812 [2024-12-09 23:23:12.393359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.393367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:31.812 [2024-12-09 23:23:12.393432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.393442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.812 [2024-12-09 23:23:12.393505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:31.812 [2024-12-09 23:23:12.393514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.812 [2024-12-09 23:23:12.393521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.812 [2024-12-09 23:23:12.393695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.335 ms, result 0 00:41:31.812 true 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75221 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75221 ']' 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75221 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75221 00:41:31.812 killing process with pid 75221 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75221' 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75221 00:41:31.812 23:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75221 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:34.340 23:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:41:34.600 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:41:34.600 fio-3.35 00:41:34.600 Starting 1 thread 00:41:41.192 00:41:41.192 test: (groupid=0, jobs=1): err= 0: pid=75422: Mon Dec 9 23:23:21 2024 00:41:41.192 read: IOPS=790, BW=52.5MiB/s (55.1MB/s)(255MiB/4847msec) 00:41:41.192 slat (nsec): min=3009, max=20280, avg=3976.00, stdev=1763.80 00:41:41.192 clat (usec): min=257, max=1389, avg=574.05, stdev=186.56 00:41:41.192 lat (usec): min=260, max=1405, avg=578.03, stdev=186.64 00:41:41.192 clat percentiles (usec): 00:41:41.192 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 433], 00:41:41.192 | 30.00th=[ 465], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 553], 00:41:41.192 | 70.00th=[ 603], 80.00th=[ 709], 90.00th=[ 914], 95.00th=[ 947], 00:41:41.192 | 99.00th=[ 1037], 99.50th=[ 1090], 99.90th=[ 1254], 99.95th=[ 1369], 00:41:41.192 | 99.99th=[ 1385] 00:41:41.192 write: IOPS=796, BW=52.9MiB/s (55.5MB/s)(256MiB/4842msec); 0 zone resets 00:41:41.192 slat (nsec): min=13567, max=88980, avg=18039.50, stdev=3967.99 00:41:41.192 clat (usec): min=315, max=2902, avg=653.27, stdev=226.32 00:41:41.192 lat (usec): min=331, max=2938, avg=671.31, stdev=226.49 00:41:41.192 clat percentiles (usec): 00:41:41.192 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 416], 20.00th=[ 486], 00:41:41.192 | 30.00th=[ 515], 40.00th=[ 570], 50.00th=[ 619], 60.00th=[ 627], 00:41:41.192 | 70.00th=[ 685], 80.00th=[ 824], 90.00th=[ 963], 95.00th=[ 1029], 00:41:41.192 | 99.00th=[ 1352], 99.50th=[ 1762], 99.90th=[ 2040], 99.95th=[ 2114], 00:41:41.192 | 99.99th=[ 2900] 00:41:41.192 bw ( KiB/s): min=38954, max=67728, per=99.52%, avg=53892.67, stdev=11306.59, samples=9 00:41:41.192 iops : min= 572, max= 996, avg=792.44, stdev=166.41, samples=9 00:41:41.192 lat (usec) : 500=33.26%, 750=46.63%, 1000=15.00% 
00:41:41.192 lat (msec) : 2=5.07%, 4=0.05% 00:41:41.192 cpu : usr=99.34%, sys=0.04%, ctx=13, majf=0, minf=1169 00:41:41.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.192 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:41.192 00:41:41.192 Run status group 0 (all jobs): 00:41:41.192 READ: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=255MiB (267MB), run=4847-4847msec 00:41:41.192 WRITE: bw=52.9MiB/s (55.5MB/s), 52.9MiB/s-52.9MiB/s (55.5MB/s-55.5MB/s), io=256MiB (269MB), run=4842-4842msec 00:41:42.132 ----------------------------------------------------- 00:41:42.132 Suppressions used: 00:41:42.132 count bytes template 00:41:42.132 1 5 /usr/src/fio/parse.c 00:41:42.132 1 8 libtcmalloc_minimal.so 00:41:42.132 1 904 libcrypto.so 00:41:42.132 ----------------------------------------------------- 00:41:42.132 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:42.132 23:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:41:42.132 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:41:42.132 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:41:42.132 fio-3.35 00:41:42.132 Starting 2 threads 00:42:14.252 00:42:14.252 first_half: (groupid=0, jobs=1): err= 0: pid=75536: Mon Dec 9 23:23:54 2024 00:42:14.252 read: IOPS=2159, BW=8640KiB/s (8847kB/s)(255MiB/30207msec) 00:42:14.252 slat (nsec): min=3174, max=35221, avg=5597.05, stdev=1931.02 00:42:14.252 clat (usec): min=832, max=516685, avg=40195.23, stdev=30465.65 00:42:14.252 lat (usec): min=838, max=516694, avg=40200.82, stdev=30465.78 00:42:14.252 clat percentiles (msec): 00:42:14.252 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:42:14.252 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 35], 60.00th=[ 37], 00:42:14.252 | 70.00th=[ 39], 80.00th=[ 42], 90.00th=[ 48], 95.00th=[ 61], 00:42:14.252 | 99.00th=[ 192], 99.50th=[ 224], 99.90th=[ 414], 99.95th=[ 456], 00:42:14.252 | 99.99th=[ 506] 00:42:14.252 write: IOPS=2458, BW=9832KiB/s (10.1MB/s)(256MiB/26661msec); 0 zone resets 00:42:14.252 slat (usec): min=3, max=2779, avg= 8.69, stdev=23.90 00:42:14.252 clat (usec): min=370, max=164899, avg=18916.73, stdev=33534.05 00:42:14.252 lat (usec): min=379, max=164910, avg=18925.41, stdev=33534.65 00:42:14.252 clat percentiles (usec): 00:42:14.252 | 1.00th=[ 1123], 5.00th=[ 1729], 10.00th=[ 2114], 20.00th=[ 2933], 00:42:14.252 | 30.00th=[ 4113], 40.00th=[ 5932], 50.00th=[ 7308], 60.00th=[ 8717], 00:42:14.252 | 70.00th=[ 12256], 80.00th=[ 17695], 90.00th=[ 58459], 95.00th=[123208], 00:42:14.252 | 99.00th=[143655], 99.50th=[147850], 99.90th=[154141], 99.95th=[156238], 00:42:14.252 | 99.99th=[160433] 00:42:14.252 bw ( KiB/s): min= 952, max=42792, per=88.86%, avg=17474.53, stdev=11413.96, samples=30 00:42:14.252 iops : min= 238, max=10698, avg=4368.63, stdev=2853.49, samples=30 00:42:14.252 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.23% 00:42:14.252 lat (msec) : 2=3.99%, 4=10.55%, 10=19.46%, 20=9.95%, 50=45.98% 00:42:14.252 lat (msec) : 100=4.87%, 250=4.73%, 500=0.18%, 750=0.01% 00:42:14.252 cpu : usr=98.95%, sys=0.20%, ctx=55, majf=0, minf=5565 00:42:14.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:42:14.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.252 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.252 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.252 second_half: (groupid=0, jobs=1): err= 0: pid=75537: Mon Dec 9 23:23:54 2024 00:42:14.252 read: IOPS=2167, BW=8668KiB/s (8876kB/s)(255MiB/30079msec) 00:42:14.252 slat (nsec): min=3170, max=34162, avg=5609.19, stdev=1924.07 00:42:14.252 clat (usec): min=776, max=527771, avg=41438.65, stdev=28034.96 00:42:14.252 lat (usec): min=782, max=527787, avg=41444.26, stdev=28035.04 00:42:14.252 clat percentiles (msec): 00:42:14.252 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:42:14.252 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 36], 60.00th=[ 37], 00:42:14.252 | 70.00th=[ 40], 
80.00th=[ 43], 90.00th=[ 52], 95.00th=[ 74], 00:42:14.252 | 99.00th=[ 194], 99.50th=[ 218], 99.90th=[ 292], 99.95th=[ 313], 00:42:14.252 | 99.99th=[ 518] 00:42:14.252 write: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(256MiB/24785msec); 0 zone resets 00:42:14.252 slat (usec): min=4, max=3368, avg= 8.72, stdev=23.10 00:42:14.252 clat (usec): min=370, max=166919, avg=17509.04, stdev=32638.23 00:42:14.252 lat (usec): min=380, max=166929, avg=17517.76, stdev=32638.75 00:42:14.252 clat percentiles (usec): 00:42:14.252 | 1.00th=[ 1074], 5.00th=[ 1598], 10.00th=[ 1926], 20.00th=[ 2474], 00:42:14.252 | 30.00th=[ 3228], 40.00th=[ 4228], 50.00th=[ 5800], 60.00th=[ 7504], 00:42:14.252 | 70.00th=[ 11863], 80.00th=[ 18220], 90.00th=[ 28705], 95.00th=[120062], 00:42:14.252 | 99.00th=[143655], 99.50th=[145753], 99.90th=[154141], 99.95th=[158335], 00:42:14.252 | 99.99th=[162530] 00:42:14.252 bw ( KiB/s): min= 880, max=37032, per=88.87%, avg=17476.27, stdev=9656.83, samples=30 00:42:14.252 iops : min= 220, max= 9258, avg=4369.07, stdev=2414.21, samples=30 00:42:14.252 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.28% 00:42:14.252 lat (msec) : 2=5.37%, 4=13.41%, 10=15.12%, 20=8.90%, 50=47.08% 00:42:14.252 lat (msec) : 100=4.80%, 250=4.83%, 500=0.14%, 750=0.01% 00:42:14.252 cpu : usr=99.27%, sys=0.16%, ctx=70, majf=0, minf=5560 00:42:14.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:42:14.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.253 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.253 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.253 00:42:14.253 Run status group 0 (all jobs): 00:42:14.253 READ: bw=16.9MiB/s (17.7MB/s), 8640KiB/s-8668KiB/s (8847kB/s-8876kB/s), io=509MiB (534MB), run=30079-30207msec 00:42:14.253 WRITE: bw=19.2MiB/s (20.1MB/s), 9832KiB/s-10.3MiB/s (10.1MB/s-10.8MB/s), io=512MiB (537MB), run=24785-26661msec 00:42:15.627 ----------------------------------------------------- 00:42:15.627 Suppressions used: 00:42:15.627 count bytes template 00:42:15.627 2 10 /usr/src/fio/parse.c 00:42:15.627 3 288 /usr/src/fio/iolog.c 00:42:15.627 1 8 libtcmalloc_minimal.so 00:42:15.627 1 904 libcrypto.so 00:42:15.627 ----------------------------------------------------- 00:42:15.627 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:15.627 23:23:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:42:15.888 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:42:15.888 fio-3.35 00:42:15.888 Starting 1 thread 00:42:34.022 00:42:34.022 test: (groupid=0, jobs=1): err= 0: pid=75916: Mon Dec 9 23:24:11 2024 00:42:34.022 read: IOPS=8117, BW=31.7MiB/s (33.2MB/s)(255MiB/8032msec) 00:42:34.022 slat (nsec): min=3122, max=86452, avg=3731.42, stdev=929.94 00:42:34.022 clat (usec): min=524, max=30769, avg=15760.00, stdev=1646.41 00:42:34.022 lat (usec): min=533, max=30773, avg=15763.73, stdev=1646.49 00:42:34.022 clat percentiles (usec): 00:42:34.022 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14746], 20.00th=[15008], 00:42:34.022 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:42:34.022 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16909], 95.00th=[19268], 00:42:34.022 | 99.00th=[23200], 99.50th=[24249], 99.90th=[28443], 99.95th=[29754], 00:42:34.022 | 99.99th=[30540] 00:42:34.022 write: IOPS=10.6k, BW=41.3MiB/s (43.4MB/s)(256MiB/6192msec); 0 zone resets 00:42:34.022 slat (usec): min=4, max=532, avg= 7.94, stdev= 4.47 00:42:34.022 clat (usec): min=486, max=62868, avg=12035.90, stdev=12541.59 00:42:34.022 lat (usec): min=491, max=62875, avg=12043.85, stdev=12541.73 00:42:34.022 clat percentiles (usec): 00:42:34.022 | 1.00th=[ 627], 5.00th=[ 824], 10.00th=[ 938], 20.00th=[ 1205], 00:42:34.022 | 30.00th=[ 1582], 40.00th=[ 2802], 50.00th=[ 9896], 60.00th=[12780], 00:42:34.022 | 70.00th=[15401], 80.00th=[17695], 90.00th=[31851], 95.00th=[40109], 00:42:34.022 | 99.00th=[50070], 99.50th=[53740], 99.90th=[58459], 99.95th=[60031], 00:42:34.022 | 99.99th=[62129] 00:42:34.022 bw ( KiB/s): min=17048, max=62072, per=95.26%, avg=40329.85, stdev=10781.93, samples=13 00:42:34.022 iops : min= 4262, max=15518, avg=10082.46, stdev=2695.48, samples=13 00:42:34.022 lat (usec) : 500=0.01%, 750=1.51%, 1000=4.83% 00:42:34.022 lat (msec) : 2=11.70%, 4=2.58%, 10=4.63%, 20=64.33%, 50=9.89% 
00:42:34.022 lat (msec) : 100=0.52% 00:42:34.022 cpu : usr=98.98%, sys=0.17%, ctx=39, majf=0, minf=5565 00:42:34.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:42:34.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.022 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:34.023 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:34.023 00:42:34.023 Run status group 0 (all jobs): 00:42:34.023 READ: bw=31.7MiB/s (33.2MB/s), 31.7MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), run=8032-8032msec 00:42:34.023 WRITE: bw=41.3MiB/s (43.4MB/s), 41.3MiB/s-41.3MiB/s (43.4MB/s-43.4MB/s), io=256MiB (268MB), run=6192-6192msec 00:42:34.023 ----------------------------------------------------- 00:42:34.023 Suppressions used: 00:42:34.023 count bytes template 00:42:34.023 1 5 /usr/src/fio/parse.c 00:42:34.023 2 192 /usr/src/fio/iolog.c 00:42:34.023 1 8 libtcmalloc_minimal.so 00:42:34.023 1 904 libcrypto.so 00:42:34.023 ----------------------------------------------------- 00:42:34.023 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:34.023 Remove shared memory files 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57138 /dev/shm/spdk_tgt_trace.pid74138 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:42:34.023 ************************************ 00:42:34.023 END TEST ftl_fio_basic 00:42:34.023 ************************************ 00:42:34.023 00:42:34.023 real 1m10.541s 00:42:34.023 user 2m35.700s 00:42:34.023 sys 0m3.154s 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.023 23:24:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:34.023 23:24:13 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:42:34.023 23:24:13 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:34.023 23:24:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.023 23:24:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:34.023 ************************************ 00:42:34.023 START TEST ftl_bdevperf 00:42:34.023 ************************************ 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:42:34.023 * Looking for test storage... 
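A note on the three fio runs above (randw-verify, randw-verify-j2, randw-verify-depth128): each goes through the same sanitizer shim visible in the xtrace. ldd inspects the spdk_bdev fio plugin, grep/awk extract the path of the libasan it links against (/usr/lib64/libasan.so.8 here), and fio is then launched with that runtime placed ahead of the plugin in LD_PRELOAD, since ASAN must be loaded before any instrumented code runs. A standalone sketch of the pattern, with paths copied from the trace above:

  #!/usr/bin/env bash
  # Run fio with an ASAN-built external SPDK ioengine (paths from the trace above).
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

  # Locate the ASAN runtime the plugin links against; empty if not sanitized.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # Preload the sanitizer runtime (if found) ahead of the plugin itself.
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"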
00:42:34.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.023 --rc genhtml_branch_coverage=1 00:42:34.023 --rc genhtml_function_coverage=1 00:42:34.023 --rc genhtml_legend=1 00:42:34.023 --rc geninfo_all_blocks=1 00:42:34.023 --rc geninfo_unexecuted_blocks=1 00:42:34.023 00:42:34.023 ' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.023 --rc genhtml_branch_coverage=1 00:42:34.023 
--rc genhtml_function_coverage=1 00:42:34.023 --rc genhtml_legend=1 00:42:34.023 --rc geninfo_all_blocks=1 00:42:34.023 --rc geninfo_unexecuted_blocks=1 00:42:34.023 00:42:34.023 ' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.023 --rc genhtml_branch_coverage=1 00:42:34.023 --rc genhtml_function_coverage=1 00:42:34.023 --rc genhtml_legend=1 00:42:34.023 --rc geninfo_all_blocks=1 00:42:34.023 --rc geninfo_unexecuted_blocks=1 00:42:34.023 00:42:34.023 ' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.023 --rc genhtml_branch_coverage=1 00:42:34.023 --rc genhtml_function_coverage=1 00:42:34.023 --rc genhtml_legend=1 00:42:34.023 --rc geninfo_all_blocks=1 00:42:34.023 --rc geninfo_unexecuted_blocks=1 00:42:34.023 00:42:34.023 ' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:34.023 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76164 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76164 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76164 ']' 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:34.024 23:24:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:42:34.024 [2024-12-09 23:24:13.963859] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
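Note: bdevperf is started above with -z, i.e. idle, and the script then blocks in waitforlisten until the RPC socket is up; the FTL device under test is assembled over rpc.py afterwards (see the bdev_nvme_attach_controller and bdev_ftl_create calls that follow), and only then is a workload triggered. A rough sketch of that drive-it-over-RPC pattern, assuming upstream SPDK's perform_tests RPC and bdevperf.py helper (whether this job uses exactly that trigger is outside the excerpt):

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk

  # Start bdevperf idle (-z) targeting bdev ftl0, as in the trace above.
  "$spdk"/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!

  # Crude readiness probe in place of the suite's waitforlisten helper.
  until "$spdk"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # ... build the device stack over RPC here (bdev_ftl_create etc.) ...

  # Kick off the configured workload and wait for bdevperf to finish.
  "$spdk"/examples/bdev/bdevperf/bdevperf.py perform_tests
  wait "$bdevperf_pid"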
00:42:34.024 [2024-12-09 23:24:13.964244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76164 ] 00:42:34.024 [2024-12-09 23:24:14.129727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.024 [2024-12-09 23:24:14.262586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.284 23:24:14 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:34.284 23:24:14 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:42:34.285 23:24:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:42:34.546 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:34.805 { 00:42:34.805 "name": "nvme0n1", 00:42:34.805 "aliases": [ 00:42:34.805 "c7460fbb-c673-4c94-a27b-f4fd995f1842" 00:42:34.805 ], 00:42:34.805 "product_name": "NVMe disk", 00:42:34.805 "block_size": 4096, 00:42:34.805 "num_blocks": 1310720, 00:42:34.805 "uuid": "c7460fbb-c673-4c94-a27b-f4fd995f1842", 00:42:34.805 "numa_id": -1, 00:42:34.805 "assigned_rate_limits": { 00:42:34.805 "rw_ios_per_sec": 0, 00:42:34.805 "rw_mbytes_per_sec": 0, 00:42:34.805 "r_mbytes_per_sec": 0, 00:42:34.805 "w_mbytes_per_sec": 0 00:42:34.805 }, 00:42:34.805 "claimed": true, 00:42:34.805 "claim_type": "read_many_write_one", 00:42:34.805 "zoned": false, 00:42:34.805 "supported_io_types": { 00:42:34.805 "read": true, 00:42:34.805 "write": true, 00:42:34.805 "unmap": true, 00:42:34.805 "flush": true, 00:42:34.805 "reset": true, 00:42:34.805 "nvme_admin": true, 00:42:34.805 "nvme_io": true, 00:42:34.805 "nvme_io_md": false, 00:42:34.805 "write_zeroes": true, 00:42:34.805 "zcopy": false, 00:42:34.805 "get_zone_info": false, 00:42:34.805 "zone_management": false, 00:42:34.805 "zone_append": false, 00:42:34.805 "compare": true, 00:42:34.805 "compare_and_write": false, 00:42:34.805 "abort": true, 00:42:34.805 "seek_hole": false, 00:42:34.805 "seek_data": false, 00:42:34.805 "copy": true, 00:42:34.805 "nvme_iov_md": false 00:42:34.805 }, 00:42:34.805 "driver_specific": { 00:42:34.805 
"nvme": [ 00:42:34.805 { 00:42:34.805 "pci_address": "0000:00:11.0", 00:42:34.805 "trid": { 00:42:34.805 "trtype": "PCIe", 00:42:34.805 "traddr": "0000:00:11.0" 00:42:34.805 }, 00:42:34.805 "ctrlr_data": { 00:42:34.805 "cntlid": 0, 00:42:34.805 "vendor_id": "0x1b36", 00:42:34.805 "model_number": "QEMU NVMe Ctrl", 00:42:34.805 "serial_number": "12341", 00:42:34.805 "firmware_revision": "8.0.0", 00:42:34.805 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:34.805 "oacs": { 00:42:34.805 "security": 0, 00:42:34.805 "format": 1, 00:42:34.805 "firmware": 0, 00:42:34.805 "ns_manage": 1 00:42:34.805 }, 00:42:34.805 "multi_ctrlr": false, 00:42:34.805 "ana_reporting": false 00:42:34.805 }, 00:42:34.805 "vs": { 00:42:34.805 "nvme_version": "1.4" 00:42:34.805 }, 00:42:34.805 "ns_data": { 00:42:34.805 "id": 1, 00:42:34.805 "can_share": false 00:42:34.805 } 00:42:34.805 } 00:42:34.805 ], 00:42:34.805 "mp_policy": "active_passive" 00:42:34.805 } 00:42:34.805 } 00:42:34.805 ]' 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:34.805 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:35.064 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7147723f-6860-4622-b017-057f07f39980 00:42:35.064 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:42:35.064 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7147723f-6860-4622-b017-057f07f39980 00:42:35.323 23:24:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:42:35.584 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=4a27a4b7-fa6f-49f4-b336-1f9a5c487d24 00:42:35.584 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4a27a4b7-fa6f-49f4-b336-1f9a5c487d24 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:35.844 23:24:16 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:42:35.844 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:36.105 { 00:42:36.105 "name": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:36.105 "aliases": [ 00:42:36.105 "lvs/nvme0n1p0" 00:42:36.105 ], 00:42:36.105 "product_name": "Logical Volume", 00:42:36.105 "block_size": 4096, 00:42:36.105 "num_blocks": 26476544, 00:42:36.105 "uuid": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:36.105 "assigned_rate_limits": { 00:42:36.105 "rw_ios_per_sec": 0, 00:42:36.105 "rw_mbytes_per_sec": 0, 00:42:36.105 "r_mbytes_per_sec": 0, 00:42:36.105 "w_mbytes_per_sec": 0 00:42:36.105 }, 00:42:36.105 "claimed": false, 00:42:36.105 "zoned": false, 00:42:36.105 "supported_io_types": { 00:42:36.105 "read": true, 00:42:36.105 "write": true, 00:42:36.105 "unmap": true, 00:42:36.105 "flush": false, 00:42:36.105 "reset": true, 00:42:36.105 "nvme_admin": false, 00:42:36.105 "nvme_io": false, 00:42:36.105 "nvme_io_md": false, 00:42:36.105 "write_zeroes": true, 00:42:36.105 "zcopy": false, 00:42:36.105 "get_zone_info": false, 00:42:36.105 "zone_management": false, 00:42:36.105 "zone_append": false, 00:42:36.105 "compare": false, 00:42:36.105 "compare_and_write": false, 00:42:36.105 "abort": false, 00:42:36.105 "seek_hole": true, 00:42:36.105 "seek_data": true, 00:42:36.105 "copy": false, 00:42:36.105 "nvme_iov_md": false 00:42:36.105 }, 00:42:36.105 "driver_specific": { 00:42:36.105 "lvol": { 00:42:36.105 "lvol_store_uuid": "4a27a4b7-fa6f-49f4-b336-1f9a5c487d24", 00:42:36.105 "base_bdev": "nvme0n1", 00:42:36.105 "thin_provision": true, 00:42:36.105 "num_allocated_clusters": 0, 00:42:36.105 "snapshot": false, 00:42:36.105 "clone": false, 00:42:36.105 "esnap_clone": false 00:42:36.105 } 00:42:36.105 } 00:42:36.105 } 00:42:36.105 ]' 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:42:36.105 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:36.365 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:36.365 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:36.365 23:24:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.365 23:24:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.365 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:36.366 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:42:36.366 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:42:36.366 23:24:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:36.626 { 00:42:36.626 "name": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:36.626 "aliases": [ 00:42:36.626 "lvs/nvme0n1p0" 00:42:36.626 ], 00:42:36.626 "product_name": "Logical Volume", 00:42:36.626 "block_size": 4096, 00:42:36.626 "num_blocks": 26476544, 00:42:36.626 "uuid": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:36.626 "assigned_rate_limits": { 00:42:36.626 "rw_ios_per_sec": 0, 00:42:36.626 "rw_mbytes_per_sec": 0, 00:42:36.626 "r_mbytes_per_sec": 0, 00:42:36.626 "w_mbytes_per_sec": 0 00:42:36.626 }, 00:42:36.626 "claimed": false, 00:42:36.626 "zoned": false, 00:42:36.626 "supported_io_types": { 00:42:36.626 "read": true, 00:42:36.626 "write": true, 00:42:36.626 "unmap": true, 00:42:36.626 "flush": false, 00:42:36.626 "reset": true, 00:42:36.626 "nvme_admin": false, 00:42:36.626 "nvme_io": false, 00:42:36.626 "nvme_io_md": false, 00:42:36.626 "write_zeroes": true, 00:42:36.626 "zcopy": false, 00:42:36.626 "get_zone_info": false, 00:42:36.626 "zone_management": false, 00:42:36.626 "zone_append": false, 00:42:36.626 "compare": false, 00:42:36.626 "compare_and_write": false, 00:42:36.626 "abort": false, 00:42:36.626 "seek_hole": true, 00:42:36.626 "seek_data": true, 00:42:36.626 "copy": false, 00:42:36.626 "nvme_iov_md": false 00:42:36.626 }, 00:42:36.626 "driver_specific": { 00:42:36.626 "lvol": { 00:42:36.626 "lvol_store_uuid": "4a27a4b7-fa6f-49f4-b336-1f9a5c487d24", 00:42:36.626 "base_bdev": "nvme0n1", 00:42:36.626 "thin_provision": true, 00:42:36.626 "num_allocated_clusters": 0, 00:42:36.626 "snapshot": false, 00:42:36.626 "clone": false, 00:42:36.626 "esnap_clone": false 00:42:36.626 } 00:42:36.626 } 00:42:36.626 } 00:42:36.626 ]' 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:42:36.626 23:24:17 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:42:36.887 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1dea449-f814-46fc-95a5-7e82e3a68aec 00:42:37.148 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:37.148 { 00:42:37.148 "name": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:37.149 "aliases": [ 00:42:37.149 "lvs/nvme0n1p0" 00:42:37.149 ], 00:42:37.149 "product_name": "Logical Volume", 00:42:37.149 "block_size": 4096, 00:42:37.149 "num_blocks": 26476544, 00:42:37.149 "uuid": "d1dea449-f814-46fc-95a5-7e82e3a68aec", 00:42:37.149 "assigned_rate_limits": { 00:42:37.149 "rw_ios_per_sec": 0, 00:42:37.149 "rw_mbytes_per_sec": 0, 00:42:37.149 "r_mbytes_per_sec": 0, 00:42:37.149 "w_mbytes_per_sec": 0 00:42:37.149 }, 00:42:37.149 "claimed": false, 00:42:37.149 "zoned": false, 00:42:37.149 "supported_io_types": { 00:42:37.149 "read": true, 00:42:37.149 "write": true, 00:42:37.149 "unmap": true, 00:42:37.149 "flush": false, 00:42:37.149 "reset": true, 00:42:37.149 "nvme_admin": false, 00:42:37.149 "nvme_io": false, 00:42:37.149 "nvme_io_md": false, 00:42:37.149 "write_zeroes": true, 00:42:37.149 "zcopy": false, 00:42:37.149 "get_zone_info": false, 00:42:37.149 "zone_management": false, 00:42:37.149 "zone_append": false, 00:42:37.149 "compare": false, 00:42:37.149 "compare_and_write": false, 00:42:37.149 "abort": false, 00:42:37.149 "seek_hole": true, 00:42:37.149 "seek_data": true, 00:42:37.149 "copy": false, 00:42:37.149 "nvme_iov_md": false 00:42:37.149 }, 00:42:37.149 "driver_specific": { 00:42:37.149 "lvol": { 00:42:37.149 "lvol_store_uuid": "4a27a4b7-fa6f-49f4-b336-1f9a5c487d24", 00:42:37.149 "base_bdev": "nvme0n1", 00:42:37.149 "thin_provision": true, 00:42:37.149 "num_allocated_clusters": 0, 00:42:37.149 "snapshot": false, 00:42:37.149 "clone": false, 00:42:37.149 "esnap_clone": false 00:42:37.149 } 00:42:37.149 } 00:42:37.149 } 00:42:37.149 ]' 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:42:37.149 23:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d1dea449-f814-46fc-95a5-7e82e3a68aec -c nvc0n1p0 --l2p_dram_limit 20 00:42:37.411 [2024-12-09 23:24:17.853437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.411 [2024-12-09 23:24:17.853511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:37.411 [2024-12-09 23:24:17.853529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:37.411 [2024-12-09 23:24:17.853541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.411 [2024-12-09 23:24:17.853637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.411 [2024-12-09 23:24:17.853651] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:37.411 [2024-12-09 23:24:17.853660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:42:37.411 [2024-12-09 23:24:17.853671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.411 [2024-12-09 23:24:17.853691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:37.411 [2024-12-09 23:24:17.854697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:37.411 [2024-12-09 23:24:17.854742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.411 [2024-12-09 23:24:17.854753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:37.411 [2024-12-09 23:24:17.854764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:42:37.411 [2024-12-09 23:24:17.854776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.411 [2024-12-09 23:24:17.854869] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ec86b994-753a-40a0-85b1-4b080e9c915c 00:42:37.411 [2024-12-09 23:24:17.856704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.411 [2024-12-09 23:24:17.856755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:37.411 [2024-12-09 23:24:17.856773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:42:37.412 [2024-12-09 23:24:17.856781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.866108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.866168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:37.412 [2024-12-09 23:24:17.866183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.277 ms 00:42:37.412 [2024-12-09 23:24:17.866193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.866300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.866310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:37.412 [2024-12-09 23:24:17.866325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:42:37.412 [2024-12-09 23:24:17.866334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.866398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.866408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:37.412 [2024-12-09 23:24:17.866419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:37.412 [2024-12-09 23:24:17.866426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.866452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:37.412 [2024-12-09 23:24:17.870939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.871004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:37.412 [2024-12-09 23:24:17.871015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.498 ms 00:42:37.412 [2024-12-09 23:24:17.871030] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.871072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.871084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:37.412 [2024-12-09 23:24:17.871093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:42:37.412 [2024-12-09 23:24:17.871104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.871141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:37.412 [2024-12-09 23:24:17.871298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:37.412 [2024-12-09 23:24:17.871311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:37.412 [2024-12-09 23:24:17.871324] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:37.412 [2024-12-09 23:24:17.871336] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871347] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871356] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:37.412 [2024-12-09 23:24:17.871365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:37.412 [2024-12-09 23:24:17.871374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:37.412 [2024-12-09 23:24:17.871384] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:37.412 [2024-12-09 23:24:17.871395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.871410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:37.412 [2024-12-09 23:24:17.871418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:42:37.412 [2024-12-09 23:24:17.871427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.871511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.412 [2024-12-09 23:24:17.871529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:37.412 [2024-12-09 23:24:17.871537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:42:37.412 [2024-12-09 23:24:17.871549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.412 [2024-12-09 23:24:17.871640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:37.412 [2024-12-09 23:24:17.871655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:37.412 [2024-12-09 23:24:17.871663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:37.412 [2024-12-09 23:24:17.871691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:37.412 
[2024-12-09 23:24:17.871706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:37.412 [2024-12-09 23:24:17.871713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:37.412 [2024-12-09 23:24:17.871729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:37.412 [2024-12-09 23:24:17.871747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:37.412 [2024-12-09 23:24:17.871754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:37.412 [2024-12-09 23:24:17.871763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:37.412 [2024-12-09 23:24:17.871769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:37.412 [2024-12-09 23:24:17.871789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:37.412 [2024-12-09 23:24:17.871805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:37.412 [2024-12-09 23:24:17.871829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:37.412 [2024-12-09 23:24:17.871854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:37.412 [2024-12-09 23:24:17.871876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:37.412 [2024-12-09 23:24:17.871900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:37.412 [2024-12-09 23:24:17.871919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:37.412 [2024-12-09 23:24:17.871925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:37.412 [2024-12-09 23:24:17.871935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:37.412 [2024-12-09 23:24:17.871941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:37.412 [2024-12-09 23:24:17.871952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:37.412 [2024-12-09 23:24:17.871958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:37.412 [2024-12-09 23:24:17.871967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:37.412 [2024-12-09 23:24:17.871973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:42:37.412 [2024-12-09 23:24:17.872000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.872008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:37.412 [2024-12-09 23:24:17.872017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:37.412 [2024-12-09 23:24:17.872024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.872032] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:37.412 [2024-12-09 23:24:17.872040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:37.412 [2024-12-09 23:24:17.872049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:37.412 [2024-12-09 23:24:17.872056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:37.412 [2024-12-09 23:24:17.872071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:37.412 [2024-12-09 23:24:17.872079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:37.412 [2024-12-09 23:24:17.872088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:37.412 [2024-12-09 23:24:17.872095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:37.412 [2024-12-09 23:24:17.872104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:37.412 [2024-12-09 23:24:17.872111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:37.412 [2024-12-09 23:24:17.872122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:37.412 [2024-12-09 23:24:17.872132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:37.412 [2024-12-09 23:24:17.872143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:37.412 [2024-12-09 23:24:17.872151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:37.412 [2024-12-09 23:24:17.872160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:37.412 [2024-12-09 23:24:17.872167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:37.412 [2024-12-09 23:24:17.872176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:37.412 [2024-12-09 23:24:17.872183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:37.412 [2024-12-09 23:24:17.872192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:37.412 [2024-12-09 23:24:17.872199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:37.412 [2024-12-09 23:24:17.872212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:37.413 [2024-12-09 23:24:17.872221] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:37.413 [2024-12-09 23:24:17.872263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:37.413 [2024-12-09 23:24:17.872271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:37.413 [2024-12-09 23:24:17.872291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:37.413 [2024-12-09 23:24:17.872300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:37.413 [2024-12-09 23:24:17.872307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:37.413 [2024-12-09 23:24:17.872317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:37.413 [2024-12-09 23:24:17.872325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:37.413 [2024-12-09 23:24:17.872335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:42:37.413 [2024-12-09 23:24:17.872343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:37.413 [2024-12-09 23:24:17.872386] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
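The superblock dump above records each region as blk_offs/blk_sz counted in metadata blocks, while the layout dump just before it reports the same regions in MiB; the two agree if one block is 4 KiB, which these numbers imply. A quick cross-check of three entries, using plain bash arithmetic (hex literals are native to $(( ))):

# "Region type:0x2 ... blk_sz:0x5000" vs the layout dump's "Region l2p ... blocks: 80.00 MiB"
echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # 80
# l2p blk_offs:0x20 -> 32 blocks * 4 KiB = 131072 bytes, shown rounded as "offset: 0.12 MiB"
echo $(( 0x20 * 4096 ))                     # 131072
# base device "type:0x9 ... blk_sz:0x1900000" vs "Region data_btm ... blocks: 102400.00 MiB"
echo $(( 0x1900000 * 4096 / 1024 / 1024 ))  # 102400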
00:42:37.413 [2024-12-09 23:24:17.872397] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:42:40.705 [2024-12-09 23:24:21.045964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.046201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:40.705 [2024-12-09 23:24:21.046269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3173.559 ms 00:42:40.705 [2024-12-09 23:24:21.046295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.073373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.073530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:40.705 [2024-12-09 23:24:21.073594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.851 ms 00:42:40.705 [2024-12-09 23:24:21.073626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.073763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.073789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:40.705 [2024-12-09 23:24:21.073858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:42:40.705 [2024-12-09 23:24:21.073881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.118565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.118718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:40.705 [2024-12-09 23:24:21.118785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.629 ms 00:42:40.705 [2024-12-09 23:24:21.118810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.118862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.118885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:40.705 [2024-12-09 23:24:21.118907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:40.705 [2024-12-09 23:24:21.118928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.119369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.119458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:40.705 [2024-12-09 23:24:21.119514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:42:40.705 [2024-12-09 23:24:21.119537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.705 [2024-12-09 23:24:21.119657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.705 [2024-12-09 23:24:21.119746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:40.705 [2024-12-09 23:24:21.120201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:42:40.705 [2024-12-09 23:24:21.120225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.133569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.133609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:40.706 [2024-12-09 
23:24:21.133621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.308 ms 00:42:40.706 [2024-12-09 23:24:21.133636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.145403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:42:40.706 [2024-12-09 23:24:21.150877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.150912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:40.706 [2024-12-09 23:24:21.150924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.170 ms 00:42:40.706 [2024-12-09 23:24:21.150934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.228023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.228074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:40.706 [2024-12-09 23:24:21.228087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.064 ms 00:42:40.706 [2024-12-09 23:24:21.228097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.228274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.228289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:40.706 [2024-12-09 23:24:21.228299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:42:40.706 [2024-12-09 23:24:21.228311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.252607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.252647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:40.706 [2024-12-09 23:24:21.252660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.242 ms 00:42:40.706 [2024-12-09 23:24:21.252671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.275591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.275629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:40.706 [2024-12-09 23:24:21.275640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.886 ms 00:42:40.706 [2024-12-09 23:24:21.275649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.706 [2024-12-09 23:24:21.276244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.706 [2024-12-09 23:24:21.276261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:40.706 [2024-12-09 23:24:21.276270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:42:40.706 [2024-12-09 23:24:21.276280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.354170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.354339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:40.967 [2024-12-09 23:24:21.354358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.859 ms 00:42:40.967 [2024-12-09 23:24:21.354368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 
23:24:21.380788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.380829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:40.967 [2024-12-09 23:24:21.380844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.351 ms 00:42:40.967 [2024-12-09 23:24:21.380855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.405675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.405729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:42:40.967 [2024-12-09 23:24:21.405740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.781 ms 00:42:40.967 [2024-12-09 23:24:21.405750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.431526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.431572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:40.967 [2024-12-09 23:24:21.431584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.738 ms 00:42:40.967 [2024-12-09 23:24:21.431594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.431639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.431655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:40.967 [2024-12-09 23:24:21.431664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:40.967 [2024-12-09 23:24:21.431674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.431758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:40.967 [2024-12-09 23:24:21.431771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:40.967 [2024-12-09 23:24:21.431780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:42:40.967 [2024-12-09 23:24:21.431789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:40.967 [2024-12-09 23:24:21.432925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3579.045 ms, result 0 00:42:40.967 { 00:42:40.967 "name": "ftl0", 00:42:40.967 "uuid": "ec86b994-753a-40a0-85b1-4b080e9c915c" 00:42:40.967 } 00:42:40.967 23:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:42:40.967 23:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:42:40.967 23:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:42:41.228 23:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:42:41.228 [2024-12-09 23:24:21.773220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:42:41.228 I/O size of 69632 is greater than zero copy threshold (65536). 00:42:41.228 Zero copy mechanism will not be used. 00:42:41.228 Running I/O for 4 seconds... 
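The three bdevperf.sh@28 xtrace lines above are one pipeline: bdev_ftl_get_stats returns JSON, jq -r .name reduces it to the bdev name, and grep -qw only sets an exit status, so the script proceeds silently when ftl0 exists. A standalone sketch of the same guard, with the rpc.py path copied from this log (adjust it for another checkout):

# Abort early unless an FTL bdev named exactly "ftl0" is registered.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 \
    || { echo "ftl0 is not registered" >&2; exit 1; }

The first run's 69632-byte request size is 64 KiB + 4 KiB, which is why bdevperf notes above that it exceeds the 65536-byte threshold and that zero copy will not be used.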
00:42:43.562 952.00 IOPS, 63.22 MiB/s [2024-12-09T23:24:25.148Z] 1026.00 IOPS, 68.13 MiB/s [2024-12-09T23:24:26.086Z] 1001.67 IOPS, 66.52 MiB/s [2024-12-09T23:24:26.086Z] 1052.00 IOPS, 69.86 MiB/s 00:42:45.450 Latency(us) 00:42:45.450 [2024-12-09T23:24:26.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:45.450 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:42:45.450 ftl0 : 4.00 1051.79 69.85 0.00 0.00 1003.26 228.43 3024.74 00:42:45.450 [2024-12-09T23:24:26.086Z] =================================================================================================================== 00:42:45.450 [2024-12-09T23:24:26.086Z] Total : 1051.79 69.85 0.00 0.00 1003.26 228.43 3024.74 00:42:45.450 { 00:42:45.450 "results": [ 00:42:45.450 { 00:42:45.450 "job": "ftl0", 00:42:45.450 "core_mask": "0x1", 00:42:45.450 "workload": "randwrite", 00:42:45.450 "status": "finished", 00:42:45.450 "queue_depth": 1, 00:42:45.450 "io_size": 69632, 00:42:45.450 "runtime": 4.001768, 00:42:45.450 "iops": 1051.7851109809465, 00:42:45.450 "mibps": 69.84510502607847, 00:42:45.450 "io_failed": 0, 00:42:45.450 "io_timeout": 0, 00:42:45.450 "avg_latency_us": 1003.2641731089059, 00:42:45.450 "min_latency_us": 228.43076923076924, 00:42:45.450 "max_latency_us": 3024.7384615384617 00:42:45.450 } 00:42:45.450 ], 00:42:45.450 "core_count": 1 00:42:45.450 } 00:42:45.450 [2024-12-09 23:24:25.784924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:42:45.450 23:24:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:42:45.450 [2024-12-09 23:24:25.901319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:42:45.450 Running I/O for 4 seconds... 
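The JSON block printed after each run is internally consistent: the mibps field is iops multiplied by the request size. Recomputing it for the qd=1 run from the values in that JSON:

# MiB/s = IOPS * io_size / 2^20, numbers copied from the run's JSON above.
awk 'BEGIN { printf "%.5f\n", 1051.7851109809465 * 69632 / 1048576 }'
# prints ~69.84511, i.e. the reported "mibps" of 69.84510502607847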
00:42:47.330 5645.00 IOPS, 22.05 MiB/s [2024-12-09T23:24:29.348Z] 6043.00 IOPS, 23.61 MiB/s [2024-12-09T23:24:29.918Z] 6205.00 IOPS, 24.24 MiB/s [2024-12-09T23:24:30.179Z] 6086.25 IOPS, 23.77 MiB/s 00:42:49.543 Latency(us) 00:42:49.543 [2024-12-09T23:24:30.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:49.543 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:42:49.543 ftl0 : 4.03 6067.59 23.70 0.00 0.00 21008.63 316.65 52428.80 00:42:49.543 [2024-12-09T23:24:30.179Z] =================================================================================================================== 00:42:49.543 [2024-12-09T23:24:30.179Z] Total : 6067.59 23.70 0.00 0.00 21008.63 0.00 52428.80 00:42:49.543 { 00:42:49.543 "results": [ 00:42:49.543 { 00:42:49.543 "job": "ftl0", 00:42:49.543 "core_mask": "0x1", 00:42:49.543 "workload": "randwrite", 00:42:49.543 "status": "finished", 00:42:49.543 "queue_depth": 128, 00:42:49.543 "io_size": 4096, 00:42:49.543 "runtime": 4.0334, 00:42:49.543 "iops": 6067.585659741161, 00:42:49.543 "mibps": 23.70150648336391, 00:42:49.543 "io_failed": 0, 00:42:49.543 "io_timeout": 0, 00:42:49.543 "avg_latency_us": 21008.6327862731, 00:42:49.543 "min_latency_us": 316.6523076923077, 00:42:49.543 "max_latency_us": 52428.8 00:42:49.543 } 00:42:49.543 ], 00:42:49.543 "core_count": 1 00:42:49.543 } 00:42:49.543 [2024-12-09 23:24:29.945264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:42:49.543 23:24:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:42:49.543 [2024-12-09 23:24:30.055831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds...
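A useful plausibility check on the qd=128 run just above is Little's law: the mean number of requests in flight equals IOPS times mean latency, and should sit near the configured -q 128 if the queue stayed full. With the values from that run's JSON:

# in-flight = IOPS * average latency, converted from microseconds to seconds
awk 'BEGIN { printf "%.1f\n", 6067.585659741161 * 21008.6327862731 / 1e6 }'
# prints ~127.5 -> the 128-deep queue was kept essentially saturated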
00:42:51.866 5257.00 IOPS, 20.54 MiB/s [2024-12-09T23:24:33.073Z] 4839.50 IOPS, 18.90 MiB/s [2024-12-09T23:24:34.458Z] 4800.33 IOPS, 18.75 MiB/s [2024-12-09T23:24:34.458Z] 4722.75 IOPS, 18.45 MiB/s 00:42:53.822 Latency(us) 00:42:53.822 [2024-12-09T23:24:34.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.822 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:53.822 Verification LBA range: start 0x0 length 0x1400000 00:42:53.822 ftl0 : 4.02 4736.10 18.50 0.00 0.00 26943.34 281.99 40531.50 00:42:53.822 [2024-12-09T23:24:34.458Z] =================================================================================================================== 00:42:53.822 [2024-12-09T23:24:34.458Z] Total : 4736.10 18.50 0.00 0.00 26943.34 0.00 40531.50 00:42:53.822 { 00:42:53.822 "results": [ 00:42:53.822 { 00:42:53.822 "job": "ftl0", 00:42:53.822 "core_mask": "0x1", 00:42:53.822 "workload": "verify", 00:42:53.822 "status": "finished", 00:42:53.822 "verify_range": { 00:42:53.822 "start": 0, 00:42:53.822 "length": 20971520 00:42:53.822 }, 00:42:53.822 "queue_depth": 128, 00:42:53.822 "io_size": 4096, 00:42:53.822 "runtime": 4.015749, 00:42:53.822 "iops": 4736.1027793320745, 00:42:53.822 "mibps": 18.500401481765916, 00:42:53.822 "io_failed": 0, 00:42:53.822 "io_timeout": 0, 00:42:53.822 "avg_latency_us": 26943.337021844553, 00:42:53.822 "min_latency_us": 281.99384615384616, 00:42:53.822 "max_latency_us": 40531.49538461539 00:42:53.822 } 00:42:53.822 ], 00:42:53.822 "core_count": 1 00:42:53.822 } 00:42:53.822 [2024-12-09 23:24:34.089118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:42:53.822 23:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:42:53.822 [2024-12-09 23:24:34.308150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.822 [2024-12-09 23:24:34.308409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:53.822 [2024-12-09 23:24:34.308436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:53.822 [2024-12-09 23:24:34.308449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.822 [2024-12-09 23:24:34.308484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:53.822 [2024-12-09 23:24:34.311619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.822 [2024-12-09 23:24:34.311808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:53.822 [2024-12-09 23:24:34.311838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:42:53.822 [2024-12-09 23:24:34.311848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.822 [2024-12-09 23:24:34.315196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.822 [2024-12-09 23:24:34.315249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:53.822 [2024-12-09 23:24:34.315267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:42:53.822 [2024-12-09 23:24:34.315276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.083 [2024-12-09 23:24:34.539344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.539411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:42:54.084 [2024-12-09 23:24:34.539435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 224.039 ms 00:42:54.084 [2024-12-09 23:24:34.539445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.545629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.545678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:54.084 [2024-12-09 23:24:34.545694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:42:54.084 [2024-12-09 23:24:34.545706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.573647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.573862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:54.084 [2024-12-09 23:24:34.573891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.850 ms 00:42:54.084 [2024-12-09 23:24:34.573900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.592403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.592619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:54.084 [2024-12-09 23:24:34.592649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.449 ms 00:42:54.084 [2024-12-09 23:24:34.592658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.592839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.592852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:54.084 [2024-12-09 23:24:34.592867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:42:54.084 [2024-12-09 23:24:34.592875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.619796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.620024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:54.084 [2024-12-09 23:24:34.620052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.896 ms 00:42:54.084 [2024-12-09 23:24:34.620061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.646393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.646446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:54.084 [2024-12-09 23:24:34.646462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.204 ms 00:42:54.084 [2024-12-09 23:24:34.646470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.672527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 23:24:34.672579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:54.084 [2024-12-09 23:24:34.672595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.997 ms 00:42:54.084 [2024-12-09 23:24:34.672602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.698746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.084 [2024-12-09 
23:24:34.698798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:54.084 [2024-12-09 23:24:34.698817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.028 ms 00:42:54.084 [2024-12-09 23:24:34.698824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.084 [2024-12-09 23:24:34.698877] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:54.084 [2024-12-09 23:24:34.698893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.698978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:54.084 [2024-12-09 23:24:34.699475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699561] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699784] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:54.085 [2024-12-09 23:24:34.699827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:54.085 [2024-12-09 23:24:34.699837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec86b994-753a-40a0-85b1-4b080e9c915c 00:42:54.085 [2024-12-09 23:24:34.699849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:54.085 [2024-12-09 23:24:34.699858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:54.085 [2024-12-09 23:24:34.699865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:54.085 [2024-12-09 23:24:34.699875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:54.085 [2024-12-09 23:24:34.699882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:54.085 [2024-12-09 23:24:34.699892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:54.085 [2024-12-09 23:24:34.699899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:54.085 [2024-12-09 23:24:34.699910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:54.085 [2024-12-09 23:24:34.699917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:54.085 [2024-12-09 23:24:34.699927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.085 [2024-12-09 23:24:34.699934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:54.085 [2024-12-09 23:24:34.699945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:42:54.085 [2024-12-09 23:24:34.699952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.085 [2024-12-09 23:24:34.713803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.085 [2024-12-09 23:24:34.713850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:54.085 [2024-12-09 23:24:34.713864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.774 ms 00:42:54.085 [2024-12-09 23:24:34.713873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.085 [2024-12-09 23:24:34.714301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:54.085 [2024-12-09 23:24:34.714313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:54.085 [2024-12-09 23:24:34.714325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:42:54.085 [2024-12-09 23:24:34.714333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.754123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.754173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:54.347 [2024-12-09 23:24:34.754192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.754201] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.754273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.754281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:54.347 [2024-12-09 23:24:34.754292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.754300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.754384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.754395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:54.347 [2024-12-09 23:24:34.754406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.754414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.754432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.754441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:54.347 [2024-12-09 23:24:34.754451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.754458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.841434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.841503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:54.347 [2024-12-09 23:24:34.841523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.841532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:54.347 [2024-12-09 23:24:34.912452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.912461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:54.347 [2024-12-09 23:24:34.912606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.912615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:54.347 [2024-12-09 23:24:34.912685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.912693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:54.347 [2024-12-09 23:24:34.912829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:42:54.347 [2024-12-09 23:24:34.912837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:54.347 [2024-12-09 23:24:34.912893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.912901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.912944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.912956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:54.347 [2024-12-09 23:24:34.912967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.913022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.913073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:54.347 [2024-12-09 23:24:34.913084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:54.347 [2024-12-09 23:24:34.913096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:54.347 [2024-12-09 23:24:34.913104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:54.347 [2024-12-09 23:24:34.913249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 605.055 ms, result 0 00:42:54.347 true 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76164 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76164 ']' 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76164 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76164 00:42:54.347 killing process with pid 76164 00:42:54.347 Received shutdown signal, test time was about 4.000000 seconds 00:42:54.347 00:42:54.347 Latency(us) 00:42:54.347 [2024-12-09T23:24:34.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.347 [2024-12-09T23:24:34.983Z] =================================================================================================================== 00:42:54.347 [2024-12-09T23:24:34.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:54.347 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:54.348 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76164' 00:42:54.348 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76164 00:42:54.348 23:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76164 00:42:59.629 Remove shared memory files 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:59.629 23:24:39 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:42:59.629 ************************************ 00:42:59.629 END TEST ftl_bdevperf 00:42:59.629 ************************************ 00:42:59.629 00:42:59.629 real 0m26.126s 00:42:59.629 user 0m28.781s 00:42:59.629 sys 0m1.044s 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:59.629 23:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:59.629 23:24:39 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:42:59.629 23:24:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:59.629 23:24:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:59.629 23:24:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:59.629 ************************************ 00:42:59.629 START TEST ftl_trim 00:42:59.629 ************************************ 00:42:59.629 23:24:39 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:42:59.629 * Looking for test storage... 00:42:59.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:59.629 23:24:39 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:59.629 23:24:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:42:59.629 23:24:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:59.629 23:24:40 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.629 --rc genhtml_branch_coverage=1 00:42:59.629 --rc genhtml_function_coverage=1 00:42:59.629 --rc genhtml_legend=1 00:42:59.629 --rc geninfo_all_blocks=1 00:42:59.629 --rc geninfo_unexecuted_blocks=1 00:42:59.629 00:42:59.629 ' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.629 --rc genhtml_branch_coverage=1 00:42:59.629 --rc genhtml_function_coverage=1 00:42:59.629 --rc genhtml_legend=1 00:42:59.629 --rc geninfo_all_blocks=1 00:42:59.629 --rc geninfo_unexecuted_blocks=1 00:42:59.629 00:42:59.629 ' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.629 --rc genhtml_branch_coverage=1 00:42:59.629 --rc genhtml_function_coverage=1 00:42:59.629 --rc genhtml_legend=1 00:42:59.629 --rc geninfo_all_blocks=1 00:42:59.629 --rc geninfo_unexecuted_blocks=1 00:42:59.629 00:42:59.629 ' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.629 --rc genhtml_branch_coverage=1 00:42:59.629 --rc genhtml_function_coverage=1 00:42:59.629 --rc genhtml_legend=1 00:42:59.629 --rc geninfo_all_blocks=1 00:42:59.629 --rc geninfo_unexecuted_blocks=1 00:42:59.629 00:42:59.629 ' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
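The shell trace above walks the harness's lcov version check field by field. Condensed, the comparison it performs looks like the sketch below (reconstructed from the traced commands, not the verbatim scripts/common.sh source); lt A B succeeds when version A sorts before version B:

    # Compare dot-separated numeric version fields left to right.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                          # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"  # matches the trace: 1 < 2 at field 0

The result of that check is what selects the --rc lcov_branch_coverage flags exported in the LCOV_OPTS lines just above.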
00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:59.629 23:24:40 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76506 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76506 00:42:59.629 23:24:40 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76506 ']' 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:59.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:59.629 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:59.630 23:24:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:42:59.630 [2024-12-09 23:24:40.166534] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:42:59.630 [2024-12-09 23:24:40.166886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:42:59.891 [2024-12-09 23:24:40.335473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:59.891 [2024-12-09 23:24:40.472721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:59.891 [2024-12-09 23:24:40.473105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:59.891 [2024-12-09 23:24:40.473312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:00.826 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:00.826 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:43:00.826 23:24:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:43:00.826 23:24:41 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:43:00.826 23:24:41 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:43:00.827 23:24:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:43:00.827 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:43:00.827 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:00.827 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:43:00.827 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:43:00.827 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:01.085 { 00:43:01.085 "name": "nvme0n1", 00:43:01.085 "aliases": [ 
00:43:01.085 "52ed3d7f-3745-4e9a-81d4-b50a3e2ef977" 00:43:01.085 ], 00:43:01.085 "product_name": "NVMe disk", 00:43:01.085 "block_size": 4096, 00:43:01.085 "num_blocks": 1310720, 00:43:01.085 "uuid": "52ed3d7f-3745-4e9a-81d4-b50a3e2ef977", 00:43:01.085 "numa_id": -1, 00:43:01.085 "assigned_rate_limits": { 00:43:01.085 "rw_ios_per_sec": 0, 00:43:01.085 "rw_mbytes_per_sec": 0, 00:43:01.085 "r_mbytes_per_sec": 0, 00:43:01.085 "w_mbytes_per_sec": 0 00:43:01.085 }, 00:43:01.085 "claimed": true, 00:43:01.085 "claim_type": "read_many_write_one", 00:43:01.085 "zoned": false, 00:43:01.085 "supported_io_types": { 00:43:01.085 "read": true, 00:43:01.085 "write": true, 00:43:01.085 "unmap": true, 00:43:01.085 "flush": true, 00:43:01.085 "reset": true, 00:43:01.085 "nvme_admin": true, 00:43:01.085 "nvme_io": true, 00:43:01.085 "nvme_io_md": false, 00:43:01.085 "write_zeroes": true, 00:43:01.085 "zcopy": false, 00:43:01.085 "get_zone_info": false, 00:43:01.085 "zone_management": false, 00:43:01.085 "zone_append": false, 00:43:01.085 "compare": true, 00:43:01.085 "compare_and_write": false, 00:43:01.085 "abort": true, 00:43:01.085 "seek_hole": false, 00:43:01.085 "seek_data": false, 00:43:01.085 "copy": true, 00:43:01.085 "nvme_iov_md": false 00:43:01.085 }, 00:43:01.085 "driver_specific": { 00:43:01.085 "nvme": [ 00:43:01.085 { 00:43:01.085 "pci_address": "0000:00:11.0", 00:43:01.085 "trid": { 00:43:01.085 "trtype": "PCIe", 00:43:01.085 "traddr": "0000:00:11.0" 00:43:01.085 }, 00:43:01.085 "ctrlr_data": { 00:43:01.085 "cntlid": 0, 00:43:01.085 "vendor_id": "0x1b36", 00:43:01.085 "model_number": "QEMU NVMe Ctrl", 00:43:01.085 "serial_number": "12341", 00:43:01.085 "firmware_revision": "8.0.0", 00:43:01.085 "subnqn": "nqn.2019-08.org.qemu:12341", 00:43:01.085 "oacs": { 00:43:01.085 "security": 0, 00:43:01.085 "format": 1, 00:43:01.085 "firmware": 0, 00:43:01.085 "ns_manage": 1 00:43:01.085 }, 00:43:01.085 "multi_ctrlr": false, 00:43:01.085 "ana_reporting": false 00:43:01.085 }, 00:43:01.085 "vs": { 00:43:01.085 "nvme_version": "1.4" 00:43:01.085 }, 00:43:01.085 "ns_data": { 00:43:01.085 "id": 1, 00:43:01.085 "can_share": false 00:43:01.085 } 00:43:01.085 } 00:43:01.085 ], 00:43:01.085 "mp_policy": "active_passive" 00:43:01.085 } 00:43:01.085 } 00:43:01.085 ]' 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:43:01.085 23:24:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:43:01.085 23:24:41 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:43:01.085 23:24:41 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:43:01.085 23:24:41 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:43:01.085 23:24:41 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:01.085 23:24:41 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:43:01.346 23:24:41 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=4a27a4b7-fa6f-49f4-b336-1f9a5c487d24 00:43:01.346 23:24:41 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:43:01.346 23:24:41 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 4a27a4b7-fa6f-49f4-b336-1f9a5c487d24 00:43:01.664 23:24:42 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d9d5ea24-7ad4-48a7-901d-f110743336ff 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d9d5ea24-7ad4-48a7-901d-f110743336ff 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:43:01.925 23:24:42 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:01.925 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:01.925 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:01.925 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:43:01.925 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:43:01.925 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:02.187 { 00:43:02.187 "name": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:02.187 "aliases": [ 00:43:02.187 "lvs/nvme0n1p0" 00:43:02.187 ], 00:43:02.187 "product_name": "Logical Volume", 00:43:02.187 "block_size": 4096, 00:43:02.187 "num_blocks": 26476544, 00:43:02.187 "uuid": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:02.187 "assigned_rate_limits": { 00:43:02.187 "rw_ios_per_sec": 0, 00:43:02.187 "rw_mbytes_per_sec": 0, 00:43:02.187 "r_mbytes_per_sec": 0, 00:43:02.187 "w_mbytes_per_sec": 0 00:43:02.187 }, 00:43:02.187 "claimed": false, 00:43:02.187 "zoned": false, 00:43:02.187 "supported_io_types": { 00:43:02.187 "read": true, 00:43:02.187 "write": true, 00:43:02.187 "unmap": true, 00:43:02.187 "flush": false, 00:43:02.187 "reset": true, 00:43:02.187 "nvme_admin": false, 00:43:02.187 "nvme_io": false, 00:43:02.187 "nvme_io_md": false, 00:43:02.187 "write_zeroes": true, 00:43:02.187 "zcopy": false, 00:43:02.187 "get_zone_info": false, 00:43:02.187 "zone_management": false, 00:43:02.187 "zone_append": false, 00:43:02.187 "compare": false, 00:43:02.187 "compare_and_write": false, 00:43:02.187 "abort": false, 00:43:02.187 "seek_hole": true, 00:43:02.187 "seek_data": true, 00:43:02.187 "copy": false, 00:43:02.187 "nvme_iov_md": false 00:43:02.187 }, 00:43:02.187 "driver_specific": { 00:43:02.187 "lvol": { 00:43:02.187 "lvol_store_uuid": "d9d5ea24-7ad4-48a7-901d-f110743336ff", 00:43:02.187 "base_bdev": "nvme0n1", 00:43:02.187 "thin_provision": true, 00:43:02.187 "num_allocated_clusters": 0, 00:43:02.187 "snapshot": false, 00:43:02.187 "clone": false, 00:43:02.187 "esnap_clone": false 00:43:02.187 } 00:43:02.187 } 00:43:02.187 } 00:43:02.187 ]' 00:43:02.187 23:24:42 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:02.187 23:24:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:43:02.187 23:24:42 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:43:02.187 23:24:42 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:43:02.187 23:24:42 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:43:02.447 23:24:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:43:02.447 23:24:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:43:02.447 23:24:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.447 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.447 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:02.447 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:43:02.447 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:43:02.447 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.708 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:02.708 { 00:43:02.708 "name": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:02.708 "aliases": [ 00:43:02.708 "lvs/nvme0n1p0" 00:43:02.708 ], 00:43:02.708 "product_name": "Logical Volume", 00:43:02.708 "block_size": 4096, 00:43:02.708 "num_blocks": 26476544, 00:43:02.708 "uuid": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:02.708 "assigned_rate_limits": { 00:43:02.708 "rw_ios_per_sec": 0, 00:43:02.708 "rw_mbytes_per_sec": 0, 00:43:02.708 "r_mbytes_per_sec": 0, 00:43:02.708 "w_mbytes_per_sec": 0 00:43:02.708 }, 00:43:02.708 "claimed": false, 00:43:02.708 "zoned": false, 00:43:02.708 "supported_io_types": { 00:43:02.708 "read": true, 00:43:02.708 "write": true, 00:43:02.708 "unmap": true, 00:43:02.708 "flush": false, 00:43:02.708 "reset": true, 00:43:02.708 "nvme_admin": false, 00:43:02.708 "nvme_io": false, 00:43:02.708 "nvme_io_md": false, 00:43:02.708 "write_zeroes": true, 00:43:02.708 "zcopy": false, 00:43:02.708 "get_zone_info": false, 00:43:02.708 "zone_management": false, 00:43:02.708 "zone_append": false, 00:43:02.708 "compare": false, 00:43:02.708 "compare_and_write": false, 00:43:02.708 "abort": false, 00:43:02.708 "seek_hole": true, 00:43:02.708 "seek_data": true, 00:43:02.708 "copy": false, 00:43:02.708 "nvme_iov_md": false 00:43:02.708 }, 00:43:02.708 "driver_specific": { 00:43:02.708 "lvol": { 00:43:02.708 "lvol_store_uuid": "d9d5ea24-7ad4-48a7-901d-f110743336ff", 00:43:02.708 "base_bdev": "nvme0n1", 00:43:02.708 "thin_provision": true, 00:43:02.708 "num_allocated_clusters": 0, 00:43:02.708 "snapshot": false, 00:43:02.708 "clone": false, 00:43:02.708 "esnap_clone": false 00:43:02.708 } 00:43:02.708 } 00:43:02.708 } 00:43:02.708 ]' 00:43:02.708 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:02.708 23:24:43 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:43:02.708 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:43:02.970 23:24:43 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:43:02.970 23:24:43 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:43:02.970 23:24:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:43:02.970 23:24:43 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:43:02.970 23:24:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:43:02.970 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 489749fb-b535-405e-a873-6fb71fc2e3aa 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:03.228 { 00:43:03.228 "name": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:03.228 "aliases": [ 00:43:03.228 "lvs/nvme0n1p0" 00:43:03.228 ], 00:43:03.228 "product_name": "Logical Volume", 00:43:03.228 "block_size": 4096, 00:43:03.228 "num_blocks": 26476544, 00:43:03.228 "uuid": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:03.228 "assigned_rate_limits": { 00:43:03.228 "rw_ios_per_sec": 0, 00:43:03.228 "rw_mbytes_per_sec": 0, 00:43:03.228 "r_mbytes_per_sec": 0, 00:43:03.228 "w_mbytes_per_sec": 0 00:43:03.228 }, 00:43:03.228 "claimed": false, 00:43:03.228 "zoned": false, 00:43:03.228 "supported_io_types": { 00:43:03.228 "read": true, 00:43:03.228 "write": true, 00:43:03.228 "unmap": true, 00:43:03.228 "flush": false, 00:43:03.228 "reset": true, 00:43:03.228 "nvme_admin": false, 00:43:03.228 "nvme_io": false, 00:43:03.228 "nvme_io_md": false, 00:43:03.228 "write_zeroes": true, 00:43:03.228 "zcopy": false, 00:43:03.228 "get_zone_info": false, 00:43:03.228 "zone_management": false, 00:43:03.228 "zone_append": false, 00:43:03.228 "compare": false, 00:43:03.228 "compare_and_write": false, 00:43:03.228 "abort": false, 00:43:03.228 "seek_hole": true, 00:43:03.228 "seek_data": true, 00:43:03.228 "copy": false, 00:43:03.228 "nvme_iov_md": false 00:43:03.228 }, 00:43:03.228 "driver_specific": { 00:43:03.228 "lvol": { 00:43:03.228 "lvol_store_uuid": "d9d5ea24-7ad4-48a7-901d-f110743336ff", 00:43:03.228 "base_bdev": "nvme0n1", 00:43:03.228 "thin_provision": true, 00:43:03.228 "num_allocated_clusters": 0, 00:43:03.228 "snapshot": false, 00:43:03.228 "clone": false, 00:43:03.228 "esnap_clone": false 00:43:03.228 } 00:43:03.228 } 00:43:03.228 } 00:43:03.228 ]' 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:03.228 23:24:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:43:03.228 23:24:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:43:03.228 23:24:43 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 489749fb-b535-405e-a873-6fb71fc2e3aa -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:43:03.487 [2024-12-09 23:24:44.026727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.026890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:03.487 [2024-12-09 23:24:44.026915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:03.487 [2024-12-09 23:24:44.026924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.029741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.029774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:03.487 [2024-12-09 23:24:44.029786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.779 ms 00:43:03.487 [2024-12-09 23:24:44.029793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.029894] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:03.487 [2024-12-09 23:24:44.030732] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:03.487 [2024-12-09 23:24:44.030840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.030894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:03.487 [2024-12-09 23:24:44.030923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:43:03.487 [2024-12-09 23:24:44.030946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.031097] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:03.487 [2024-12-09 23:24:44.032362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.032391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:43:03.487 [2024-12-09 23:24:44.032402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:43:03.487 [2024-12-09 23:24:44.032413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.037836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.037952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:03.487 [2024-12-09 23:24:44.037968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:43:03.487 [2024-12-09 23:24:44.037977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.038121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.038134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:03.487 [2024-12-09 23:24:44.038143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.071 ms 00:43:03.487 [2024-12-09 23:24:44.038155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.038185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.038194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:03.487 [2024-12-09 23:24:44.038202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:03.487 [2024-12-09 23:24:44.038213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.038242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:03.487 [2024-12-09 23:24:44.041819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.041852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:03.487 [2024-12-09 23:24:44.041865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.580 ms 00:43:03.487 [2024-12-09 23:24:44.041872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.487 [2024-12-09 23:24:44.041916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.487 [2024-12-09 23:24:44.041937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:03.487 [2024-12-09 23:24:44.041947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:03.488 [2024-12-09 23:24:44.041954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.488 [2024-12-09 23:24:44.041996] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:43:03.488 [2024-12-09 23:24:44.042146] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:03.488 [2024-12-09 23:24:44.042161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:03.488 [2024-12-09 23:24:44.042172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:03.488 [2024-12-09 23:24:44.042183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042201] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:03.488 [2024-12-09 23:24:44.042208] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:03.488 [2024-12-09 23:24:44.042219] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:03.488 [2024-12-09 23:24:44.042227] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:03.488 [2024-12-09 23:24:44.042236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.488 [2024-12-09 23:24:44.042244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:03.488 [2024-12-09 23:24:44.042253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:43:03.488 [2024-12-09 23:24:44.042260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.488 [2024-12-09 23:24:44.042371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.488 
[2024-12-09 23:24:44.042380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:03.488 [2024-12-09 23:24:44.042390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:43:03.488 [2024-12-09 23:24:44.042397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.488 [2024-12-09 23:24:44.042530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:03.488 [2024-12-09 23:24:44.042540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:03.488 [2024-12-09 23:24:44.042550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:03.488 [2024-12-09 23:24:44.042574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:03.488 [2024-12-09 23:24:44.042597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:03.488 [2024-12-09 23:24:44.042614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:03.488 [2024-12-09 23:24:44.042620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:03.488 [2024-12-09 23:24:44.042628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:03.488 [2024-12-09 23:24:44.042635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:03.488 [2024-12-09 23:24:44.042643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:03.488 [2024-12-09 23:24:44.042649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:03.488 [2024-12-09 23:24:44.042665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:03.488 [2024-12-09 23:24:44.042688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:03.488 [2024-12-09 23:24:44.042710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:03.488 [2024-12-09 23:24:44.042733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:43:03.488 [2024-12-09 23:24:44.042753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:03.488 [2024-12-09 23:24:44.042776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:03.488 [2024-12-09 23:24:44.042791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:03.488 [2024-12-09 23:24:44.042797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:03.488 [2024-12-09 23:24:44.042806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:03.488 [2024-12-09 23:24:44.042812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:03.488 [2024-12-09 23:24:44.042820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:03.488 [2024-12-09 23:24:44.042826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:03.488 [2024-12-09 23:24:44.042840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:03.488 [2024-12-09 23:24:44.042848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042854] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:03.488 [2024-12-09 23:24:44.042863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:03.488 [2024-12-09 23:24:44.042870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:03.488 [2024-12-09 23:24:44.042886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:03.488 [2024-12-09 23:24:44.042895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:03.488 [2024-12-09 23:24:44.042902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:03.488 [2024-12-09 23:24:44.042910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:03.488 [2024-12-09 23:24:44.042916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:03.488 [2024-12-09 23:24:44.042924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:03.488 [2024-12-09 23:24:44.042933] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:03.488 [2024-12-09 23:24:44.042945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.042955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:03.488 [2024-12-09 23:24:44.042964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:03.488 [2024-12-09 23:24:44.042971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:43:03.488 [2024-12-09 23:24:44.042979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:03.488 [2024-12-09 23:24:44.042998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:43:03.488 [2024-12-09 23:24:44.043006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:03.488 [2024-12-09 23:24:44.043013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:03.488 [2024-12-09 23:24:44.043024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:03.488 [2024-12-09 23:24:44.043031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:03.488 [2024-12-09 23:24:44.043041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:03.488 [2024-12-09 23:24:44.043081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:03.488 [2024-12-09 23:24:44.043093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:03.488 [2024-12-09 23:24:44.043110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:03.488 [2024-12-09 23:24:44.043118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:03.488 [2024-12-09 23:24:44.043126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:03.488 [2024-12-09 23:24:44.043134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.488 [2024-12-09 23:24:44.043143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:03.488 [2024-12-09 23:24:44.043150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:43:03.488 [2024-12-09 23:24:44.043159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.488 [2024-12-09 23:24:44.043228] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:43:03.488 [2024-12-09 23:24:44.043240] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:43:07.688 [2024-12-09 23:24:47.521385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.521444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:43:07.688 [2024-12-09 23:24:47.521459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3478.142 ms 00:43:07.688 [2024-12-09 23:24:47.521469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.546698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.546742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:07.688 [2024-12-09 23:24:47.546753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.980 ms 00:43:07.688 [2024-12-09 23:24:47.546763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.546881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.546893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:07.688 [2024-12-09 23:24:47.546916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:43:07.688 [2024-12-09 23:24:47.546931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.587898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.587944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:07.688 [2024-12-09 23:24:47.587957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.936 ms 00:43:07.688 [2024-12-09 23:24:47.587968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.588077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.588092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:07.688 [2024-12-09 23:24:47.588101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:07.688 [2024-12-09 23:24:47.588110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.588422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.588439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:07.688 [2024-12-09 23:24:47.588449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:43:07.688 [2024-12-09 23:24:47.588458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.588570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.588586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:07.688 [2024-12-09 23:24:47.588609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:43:07.688 [2024-12-09 23:24:47.588620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.602799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.602953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:43:07.688 [2024-12-09 23:24:47.602970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:43:07.688 [2024-12-09 23:24:47.602979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.614263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:07.688 [2024-12-09 23:24:47.628300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.628332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:07.688 [2024-12-09 23:24:47.628345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.212 ms 00:43:07.688 [2024-12-09 23:24:47.628354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.697949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.698137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:43:07.688 [2024-12-09 23:24:47.698160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.528 ms 00:43:07.688 [2024-12-09 23:24:47.698169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.698360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.698371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:07.688 [2024-12-09 23:24:47.698384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:43:07.688 [2024-12-09 23:24:47.698392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.721160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.721287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:43:07.688 [2024-12-09 23:24:47.721307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.736 ms 00:43:07.688 [2024-12-09 23:24:47.721318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.743888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.743918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:43:07.688 [2024-12-09 23:24:47.743931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.516 ms 00:43:07.688 [2024-12-09 23:24:47.743939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.744530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.744552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:07.688 [2024-12-09 23:24:47.744563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:43:07.688 [2024-12-09 23:24:47.744570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.810852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.810889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:43:07.688 [2024-12-09 23:24:47.810904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.250 ms 00:43:07.688 [2024-12-09 23:24:47.810913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
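Every FTL management step in this trace is reported in the same four-line shape: an Action (or Rollback) marker, a name:, a duration:, and a status:. That regularity makes it easy to pull a per-step timing summary out of a saved console log; a minimal sketch, where autotest.log is a placeholder for wherever the console output was captured:

    # List each traced FTL step next to its duration.
    grep -E 'trace_step: .*\[FTL\]\[ftl0\] (name|duration):' autotest.log |
      sed -E 's/.*(name|duration): */\1: /' |
      paste -d ' ' - -   # pair each "name: ..." with the "duration: ... ms" that follows

Each output line then reads like "name: Scrub NV cache duration: 3478.142 ms", which is handy for spotting the slow steps: in this run the NV cache scrub at 3478.142 ms dwarfs everything else in startup.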
00:43:07.688 [2024-12-09 23:24:47.834351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.834479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:43:07.688 [2024-12-09 23:24:47.834499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.368 ms 00:43:07.688 [2024-12-09 23:24:47.834509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.857027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.857170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:43:07.688 [2024-12-09 23:24:47.857188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.474 ms 00:43:07.688 [2024-12-09 23:24:47.857196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.880179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.880305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:07.688 [2024-12-09 23:24:47.880324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.922 ms 00:43:07.688 [2024-12-09 23:24:47.880332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.880379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.880389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:07.688 [2024-12-09 23:24:47.880401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:07.688 [2024-12-09 23:24:47.880408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.880481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.688 [2024-12-09 23:24:47.880490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:07.688 [2024-12-09 23:24:47.880499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:43:07.688 [2024-12-09 23:24:47.880506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.688 [2024-12-09 23:24:47.881262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:07.688 [2024-12-09 23:24:47.884161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3854.255 ms, result 0 00:43:07.688 [2024-12-09 23:24:47.885166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:07.688 { 00:43:07.689 "name": "ftl0", 00:43:07.689 "uuid": "9073a2f6-168e-47e0-b22f-dc807919fd17" 00:43:07.689 } 00:43:07.689 23:24:47 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:07.689 23:24:47 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:43:07.689 23:24:48 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:43:07.689 [ 00:43:07.689 { 00:43:07.689 "name": "ftl0", 00:43:07.689 "aliases": [ 00:43:07.689 "9073a2f6-168e-47e0-b22f-dc807919fd17" 00:43:07.689 ], 00:43:07.689 "product_name": "FTL disk", 00:43:07.689 "block_size": 4096, 00:43:07.689 "num_blocks": 23592960, 00:43:07.689 "uuid": "9073a2f6-168e-47e0-b22f-dc807919fd17", 00:43:07.689 "assigned_rate_limits": { 00:43:07.689 "rw_ios_per_sec": 0, 00:43:07.689 "rw_mbytes_per_sec": 0, 00:43:07.689 "r_mbytes_per_sec": 0, 00:43:07.689 "w_mbytes_per_sec": 0 00:43:07.689 }, 00:43:07.689 "claimed": false, 00:43:07.689 "zoned": false, 00:43:07.689 "supported_io_types": { 00:43:07.689 "read": true, 00:43:07.689 "write": true, 00:43:07.689 "unmap": true, 00:43:07.689 "flush": true, 00:43:07.689 "reset": false, 00:43:07.689 "nvme_admin": false, 00:43:07.689 "nvme_io": false, 00:43:07.689 "nvme_io_md": false, 00:43:07.689 "write_zeroes": true, 00:43:07.689 "zcopy": false, 00:43:07.689 "get_zone_info": false, 00:43:07.689 "zone_management": false, 00:43:07.689 "zone_append": false, 00:43:07.689 "compare": false, 00:43:07.689 "compare_and_write": false, 00:43:07.689 "abort": false, 00:43:07.689 "seek_hole": false, 00:43:07.689 "seek_data": false, 00:43:07.689 "copy": false, 00:43:07.689 "nvme_iov_md": false 00:43:07.689 }, 00:43:07.689 "driver_specific": { 00:43:07.689 "ftl": { 00:43:07.689 "base_bdev": "489749fb-b535-405e-a873-6fb71fc2e3aa", 00:43:07.689 "cache": "nvc0n1p0" 00:43:07.689 } 00:43:07.689 } 00:43:07.689 } 00:43:07.689 ] 00:43:07.689 23:24:48 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:43:07.689 23:24:48 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:43:07.689 23:24:48 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:43:07.956 23:24:48 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:43:07.956 23:24:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:43:08.243 23:24:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:43:08.243 { 00:43:08.243 "name": "ftl0", 00:43:08.243 "aliases": [ 00:43:08.243 "9073a2f6-168e-47e0-b22f-dc807919fd17" 00:43:08.243 ], 00:43:08.243 "product_name": "FTL disk", 00:43:08.243 "block_size": 4096, 00:43:08.243 "num_blocks": 23592960, 00:43:08.243 "uuid": "9073a2f6-168e-47e0-b22f-dc807919fd17", 00:43:08.243 "assigned_rate_limits": { 00:43:08.243 "rw_ios_per_sec": 0, 00:43:08.243 "rw_mbytes_per_sec": 0, 00:43:08.243 "r_mbytes_per_sec": 0, 00:43:08.243 "w_mbytes_per_sec": 0 00:43:08.243 }, 00:43:08.243 "claimed": false, 00:43:08.243 "zoned": false, 00:43:08.243 "supported_io_types": { 00:43:08.243 "read": true, 00:43:08.243 "write": true, 00:43:08.243 "unmap": true, 00:43:08.243 "flush": true, 00:43:08.243 "reset": false, 00:43:08.243 "nvme_admin": false, 00:43:08.244 "nvme_io": false, 00:43:08.244 "nvme_io_md": false, 00:43:08.244 "write_zeroes": true, 00:43:08.244 "zcopy": false, 00:43:08.244 "get_zone_info": false, 00:43:08.244 "zone_management": false, 00:43:08.244 "zone_append": false, 00:43:08.244 "compare": false, 00:43:08.244 "compare_and_write": false, 00:43:08.244 "abort": false, 00:43:08.244 "seek_hole": false, 00:43:08.244 "seek_data": false, 00:43:08.244 "copy": false, 00:43:08.244 "nvme_iov_md": false 00:43:08.244 }, 00:43:08.244 "driver_specific": { 00:43:08.244 "ftl": { 00:43:08.244 "base_bdev": "489749fb-b535-405e-a873-6fb71fc2e3aa", 
00:43:08.244 "cache": "nvc0n1p0" 00:43:08.244 } 00:43:08.244 } 00:43:08.244 } 00:43:08.244 ]' 00:43:08.244 23:24:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:43:08.244 23:24:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:43:08.244 23:24:48 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:43:08.517 [2024-12-09 23:24:48.908182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.908229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:08.517 [2024-12-09 23:24:48.908243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:08.517 [2024-12-09 23:24:48.908253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.908284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:08.517 [2024-12-09 23:24:48.910875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.910905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:08.517 [2024-12-09 23:24:48.910922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:43:08.517 [2024-12-09 23:24:48.910931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.911377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.911394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:08.517 [2024-12-09 23:24:48.911404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:43:08.517 [2024-12-09 23:24:48.911412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.915069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.915091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:08.517 [2024-12-09 23:24:48.915103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:43:08.517 [2024-12-09 23:24:48.915112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.922153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.922189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:08.517 [2024-12-09 23:24:48.922201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:43:08.517 [2024-12-09 23:24:48.922208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.946034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.946070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:08.517 [2024-12-09 23:24:48.946086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.744 ms 00:43:08.517 [2024-12-09 23:24:48.946094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.961649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.517 [2024-12-09 23:24:48.961682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:08.517 [2024-12-09 23:24:48.961698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.491 ms 00:43:08.517 [2024-12-09 23:24:48.961706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.517 [2024-12-09 23:24:48.961901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.518 [2024-12-09 23:24:48.961912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:08.518 [2024-12-09 23:24:48.961922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:43:08.518 [2024-12-09 23:24:48.961929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.518 [2024-12-09 23:24:48.985087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.518 [2024-12-09 23:24:48.985119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:08.518 [2024-12-09 23:24:48.985132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.133 ms 00:43:08.518 [2024-12-09 23:24:48.985139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.518 [2024-12-09 23:24:49.008165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.518 [2024-12-09 23:24:49.008195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:08.518 [2024-12-09 23:24:49.008210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.973 ms 00:43:08.518 [2024-12-09 23:24:49.008217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.518 [2024-12-09 23:24:49.030992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.518 [2024-12-09 23:24:49.031031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:08.518 [2024-12-09 23:24:49.031043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:43:08.518 [2024-12-09 23:24:49.031050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.518 [2024-12-09 23:24:49.053531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.518 [2024-12-09 23:24:49.053561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:08.518 [2024-12-09 23:24:49.053574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.389 ms 00:43:08.518 [2024-12-09 23:24:49.053582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.518 [2024-12-09 23:24:49.053641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:08.518 [2024-12-09 23:24:49.053656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053722] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 
[2024-12-09 23:24:49.053942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.053996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:43:08.518 [2024-12-09 23:24:49.054161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:08.518 [2024-12-09 23:24:49.054308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:08.519 [2024-12-09 23:24:49.054520] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:08.519 [2024-12-09 23:24:49.054531] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:08.519 [2024-12-09 23:24:49.054539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:08.519 [2024-12-09 23:24:49.054547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:08.519 [2024-12-09 23:24:49.054556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:08.519 [2024-12-09 23:24:49.054565] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:08.519 [2024-12-09 23:24:49.054572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:08.519 [2024-12-09 23:24:49.054581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:43:08.519 [2024-12-09 23:24:49.054588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:08.519 [2024-12-09 23:24:49.054595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:08.519 [2024-12-09 23:24:49.054601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:08.519 [2024-12-09 23:24:49.054610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.519 [2024-12-09 23:24:49.054617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:08.519 [2024-12-09 23:24:49.054627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:43:08.519 [2024-12-09 23:24:49.054635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.066960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.519 [2024-12-09 23:24:49.067003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:08.519 [2024-12-09 23:24:49.067017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.290 ms 00:43:08.519 [2024-12-09 23:24:49.067024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.067393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:08.519 [2024-12-09 23:24:49.067415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:08.519 [2024-12-09 23:24:49.067425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:43:08.519 [2024-12-09 23:24:49.067433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.110644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.519 [2024-12-09 23:24:49.110678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:08.519 [2024-12-09 23:24:49.110691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.519 [2024-12-09 23:24:49.110698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.110795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.519 [2024-12-09 23:24:49.110805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:08.519 [2024-12-09 23:24:49.110815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.519 [2024-12-09 23:24:49.110822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.110884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.519 [2024-12-09 23:24:49.110895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:08.519 [2024-12-09 23:24:49.110907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.519 [2024-12-09 23:24:49.110915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.519 [2024-12-09 23:24:49.110941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.519 [2024-12-09 23:24:49.110948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:08.519 [2024-12-09 23:24:49.110958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.519 [2024-12-09 23:24:49.110964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.190424] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.190468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:08.781 [2024-12-09 23:24:49.190481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.190488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:08.781 [2024-12-09 23:24:49.252289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:08.781 [2024-12-09 23:24:49.252414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:08.781 [2024-12-09 23:24:49.252481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:08.781 [2024-12-09 23:24:49.252610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:08.781 [2024-12-09 23:24:49.252687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:08.781 [2024-12-09 23:24:49.252757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:08.781 [2024-12-09 23:24:49.252819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:08.781 [2024-12-09 23:24:49.252828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:08.781 [2024-12-09 23:24:49.252837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:08.781 [2024-12-09 23:24:49.252844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:43:08.781 [2024-12-09 23:24:49.253020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.803 ms, result 0 00:43:08.781 true 00:43:08.781 23:24:49 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76506 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76506 ']' 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76506 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76506 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:08.781 killing process with pid 76506 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76506' 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76506 00:43:08.781 23:24:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76506 00:43:15.364 23:24:55 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:43:15.625 65536+0 records in 00:43:15.625 65536+0 records out 00:43:15.625 268435456 bytes (268 MB, 256 MiB) copied, 1.10276 s, 243 MB/s 00:43:15.625 23:24:56 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:15.625 [2024-12-09 23:24:56.187720] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
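(The dd numbers just above are internally consistent; a quick check using only values printed in the log:)

# 65536 blocks x 4096 B = 268435456 B, the "256 MiB" dd reports:
echo $(( 65536 * 4096 ))
# 268435456 B / 1.10276 s = 243.4e6 B/s, i.e. the ~243 MB/s (decimal) shown:
awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.10276 / 1e6 }'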
00:43:15.625 [2024-12-09 23:24:56.187870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76699 ] 00:43:15.886 [2024-12-09 23:24:56.350021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.886 [2024-12-09 23:24:56.487977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.460 [2024-12-09 23:24:56.795184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:16.460 [2024-12-09 23:24:56.795278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:16.460 [2024-12-09 23:24:56.959522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.460 [2024-12-09 23:24:56.959591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:16.460 [2024-12-09 23:24:56.959608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:16.460 [2024-12-09 23:24:56.959617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.460 [2024-12-09 23:24:56.962653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.460 [2024-12-09 23:24:56.962712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:16.460 [2024-12-09 23:24:56.962724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:43:16.460 [2024-12-09 23:24:56.962732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.460 [2024-12-09 23:24:56.962861] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:16.460 [2024-12-09 23:24:56.963600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:16.460 [2024-12-09 23:24:56.963635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.460 [2024-12-09 23:24:56.963644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:16.460 [2024-12-09 23:24:56.963655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:43:16.460 [2024-12-09 23:24:56.963663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.460 [2024-12-09 23:24:56.965533] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:16.460 [2024-12-09 23:24:56.980628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.460 [2024-12-09 23:24:56.980688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:16.460 [2024-12-09 23:24:56.980704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.098 ms 00:43:16.460 [2024-12-09 23:24:56.980713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.460 [2024-12-09 23:24:56.980853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.460 [2024-12-09 23:24:56.980868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:16.460 [2024-12-09 23:24:56.980879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:43:16.460 [2024-12-09 23:24:56.980888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.460 [2024-12-09 23:24:56.989614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:43:16.460 [2024-12-09 23:24:56.989696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:16.460 [2024-12-09 23:24:56.989708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.678 ms 00:43:16.460 [2024-12-09 23:24:56.989716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.989831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.989843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:16.461 [2024-12-09 23:24:56.989852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:43:16.461 [2024-12-09 23:24:56.989860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.989890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.989900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:16.461 [2024-12-09 23:24:56.989908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:16.461 [2024-12-09 23:24:56.989916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.989940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:16.461 [2024-12-09 23:24:56.993960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.994064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:16.461 [2024-12-09 23:24:56.994076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.025 ms 00:43:16.461 [2024-12-09 23:24:56.994085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.994170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.994181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:16.461 [2024-12-09 23:24:56.994190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:43:16.461 [2024-12-09 23:24:56.994198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.994225] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:16.461 [2024-12-09 23:24:56.994250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:16.461 [2024-12-09 23:24:56.994288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:16.461 [2024-12-09 23:24:56.994304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:16.461 [2024-12-09 23:24:56.994412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:16.461 [2024-12-09 23:24:56.994424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:16.461 [2024-12-09 23:24:56.994435] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:16.461 [2024-12-09 23:24:56.994448] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994458] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994466] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:16.461 [2024-12-09 23:24:56.994474] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:16.461 [2024-12-09 23:24:56.994482] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:16.461 [2024-12-09 23:24:56.994490] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:16.461 [2024-12-09 23:24:56.994498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.994506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:16.461 [2024-12-09 23:24:56.994515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:43:16.461 [2024-12-09 23:24:56.994523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.994611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.461 [2024-12-09 23:24:56.994633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:16.461 [2024-12-09 23:24:56.994641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:43:16.461 [2024-12-09 23:24:56.994649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.461 [2024-12-09 23:24:56.994759] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:16.461 [2024-12-09 23:24:56.994776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:16.461 [2024-12-09 23:24:56.994786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:16.461 [2024-12-09 23:24:56.994811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:16.461 [2024-12-09 23:24:56.994833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:16.461 [2024-12-09 23:24:56.994848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:16.461 [2024-12-09 23:24:56.994862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:16.461 [2024-12-09 23:24:56.994870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:16.461 [2024-12-09 23:24:56.994877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:16.461 [2024-12-09 23:24:56.994887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:16.461 [2024-12-09 23:24:56.994894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:16.461 [2024-12-09 23:24:56.994908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994915] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:16.461 [2024-12-09 23:24:56.994928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:16.461 [2024-12-09 23:24:56.994949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:16.461 [2024-12-09 23:24:56.994969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:16.461 [2024-12-09 23:24:56.994975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:16.461 [2024-12-09 23:24:56.994999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:16.461 [2024-12-09 23:24:56.995007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:16.461 [2024-12-09 23:24:56.995013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:16.461 [2024-12-09 23:24:56.995020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:16.461 [2024-12-09 23:24:56.995027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:16.461 [2024-12-09 23:24:56.995034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:16.461 [2024-12-09 23:24:56.995041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:16.461 [2024-12-09 23:24:56.995047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:16.461 [2024-12-09 23:24:56.995054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:16.461 [2024-12-09 23:24:56.995062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:16.461 [2024-12-09 23:24:56.995069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:16.461 [2024-12-09 23:24:56.995076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.995083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:16.461 [2024-12-09 23:24:56.995089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:16.461 [2024-12-09 23:24:56.995095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.995103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:16.461 [2024-12-09 23:24:56.995112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:16.461 [2024-12-09 23:24:56.995122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:16.461 [2024-12-09 23:24:56.995131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:16.461 [2024-12-09 23:24:56.995139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:16.461 [2024-12-09 23:24:56.995147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:16.461 [2024-12-09 23:24:56.995154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:16.461 
[2024-12-09 23:24:56.995162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:16.461 [2024-12-09 23:24:56.995168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:16.461 [2024-12-09 23:24:56.995176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:16.461 [2024-12-09 23:24:56.995185] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:16.461 [2024-12-09 23:24:56.995195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:16.461 [2024-12-09 23:24:56.995204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:16.461 [2024-12-09 23:24:56.995212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:16.461 [2024-12-09 23:24:56.995219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:43:16.461 [2024-12-09 23:24:56.995227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:16.461 [2024-12-09 23:24:56.995234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:43:16.461 [2024-12-09 23:24:56.995241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:16.461 [2024-12-09 23:24:56.995248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:16.461 [2024-12-09 23:24:56.995255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:16.461 [2024-12-09 23:24:56.995262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:16.461 [2024-12-09 23:24:56.995269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:16.462 [2024-12-09 23:24:56.995307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:16.462 [2024-12-09 23:24:56.995316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:43:16.462 [2024-12-09 23:24:56.995331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:16.462 [2024-12-09 23:24:56.995338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:16.462 [2024-12-09 23:24:56.995345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:16.462 [2024-12-09 23:24:56.995353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:56.995365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:16.462 [2024-12-09 23:24:56.995375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:43:16.462 [2024-12-09 23:24:56.995383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.028694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.028905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:16.462 [2024-12-09 23:24:57.029419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.251 ms 00:43:16.462 [2024-12-09 23:24:57.029477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.029749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.029854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:16.462 [2024-12-09 23:24:57.029945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:43:16.462 [2024-12-09 23:24:57.029975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.083152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.083376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:16.462 [2024-12-09 23:24:57.083407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.104 ms 00:43:16.462 [2024-12-09 23:24:57.083417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.083535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.083549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:16.462 [2024-12-09 23:24:57.083558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:16.462 [2024-12-09 23:24:57.083566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.084185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.084210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:16.462 [2024-12-09 23:24:57.084231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:43:16.462 [2024-12-09 23:24:57.084240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.462 [2024-12-09 23:24:57.084402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.462 [2024-12-09 23:24:57.084426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:16.462 [2024-12-09 23:24:57.084436] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:43:16.462 [2024-12-09 23:24:57.084444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.101331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.101383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:16.724 [2024-12-09 23:24:57.101395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.864 ms 00:43:16.724 [2024-12-09 23:24:57.101404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.116362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:43:16.724 [2024-12-09 23:24:57.116420] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:16.724 [2024-12-09 23:24:57.116434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.116443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:16.724 [2024-12-09 23:24:57.116454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.908 ms 00:43:16.724 [2024-12-09 23:24:57.116461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.143050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.143104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:16.724 [2024-12-09 23:24:57.143117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.485 ms 00:43:16.724 [2024-12-09 23:24:57.143124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.156527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.156577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:16.724 [2024-12-09 23:24:57.156590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.298 ms 00:43:16.724 [2024-12-09 23:24:57.156598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.169798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.169850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:16.724 [2024-12-09 23:24:57.169864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:43:16.724 [2024-12-09 23:24:57.169873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.170606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.170644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:16.724 [2024-12-09 23:24:57.170656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:43:16.724 [2024-12-09 23:24:57.170665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.238587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.238650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:16.724 [2024-12-09 23:24:57.238665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.895 ms 00:43:16.724 [2024-12-09 23:24:57.238674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.250188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:16.724 [2024-12-09 23:24:57.270882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.270940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:16.724 [2024-12-09 23:24:57.270954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.102 ms 00:43:16.724 [2024-12-09 23:24:57.270963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.271093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.271106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:16.724 [2024-12-09 23:24:57.271117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:43:16.724 [2024-12-09 23:24:57.271127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.271186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.271197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:16.724 [2024-12-09 23:24:57.271206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:43:16.724 [2024-12-09 23:24:57.271214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.271248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.271260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:16.724 [2024-12-09 23:24:57.271269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:43:16.724 [2024-12-09 23:24:57.271278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.271318] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:16.724 [2024-12-09 23:24:57.271329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.271338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:16.724 [2024-12-09 23:24:57.271346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:43:16.724 [2024-12-09 23:24:57.271355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.298277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.298350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:16.724 [2024-12-09 23:24:57.298364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.898 ms 00:43:16.724 [2024-12-09 23:24:57.298373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:16.724 [2024-12-09 23:24:57.298509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:16.724 [2024-12-09 23:24:57.298520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:16.724 [2024-12-09 23:24:57.298530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:43:16.724 [2024-12-09 23:24:57.298539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
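(The l2p region size in the layout dump earlier in this startup follows directly from the reported L2P parameters, 23592960 entries at 4 bytes each; a one-line sanity check:)

# One 4-byte physical address per logical block:
# 23592960 * 4 B = 94371840 B = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB".
echo "$(( 23592960 * 4 / 1048576 )) MiB"   # -> 90 MiB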
00:43:16.724 [2024-12-09 23:24:57.299659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:16.724 [2024-12-09 23:24:57.303412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.803 ms, result 0 00:43:16.724 [2024-12-09 23:24:57.304675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:16.724 [2024-12-09 23:24:57.318661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:18.110  [2024-12-09T23:24:59.687Z] Copying: 11/256 [MB] (11 MBps) [2024-12-09T23:25:00.628Z] Copying: 30/256 [MB] (19 MBps) [2024-12-09T23:25:01.567Z] Copying: 52/256 [MB] (21 MBps) [2024-12-09T23:25:02.511Z] Copying: 74/256 [MB] (22 MBps) [2024-12-09T23:25:03.452Z] Copying: 99/256 [MB] (24 MBps) [2024-12-09T23:25:04.395Z] Copying: 117/256 [MB] (18 MBps) [2024-12-09T23:25:05.336Z] Copying: 128/256 [MB] (11 MBps) [2024-12-09T23:25:06.722Z] Copying: 149/256 [MB] (20 MBps) [2024-12-09T23:25:07.668Z] Copying: 164/256 [MB] (15 MBps) [2024-12-09T23:25:08.611Z] Copying: 182/256 [MB] (17 MBps) [2024-12-09T23:25:09.555Z] Copying: 198/256 [MB] (16 MBps) [2024-12-09T23:25:10.497Z] Copying: 214/256 [MB] (15 MBps) [2024-12-09T23:25:11.439Z] Copying: 227/256 [MB] (13 MBps) [2024-12-09T23:25:12.383Z] Copying: 243/256 [MB] (15 MBps) [2024-12-09T23:25:12.383Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-09 23:25:12.175791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:31.747 [2024-12-09 23:25:12.186294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.186355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:31.747 [2024-12-09 23:25:12.186372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:31.747 [2024-12-09 23:25:12.186391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.186418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:31.747 [2024-12-09 23:25:12.189439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.189677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:31.747 [2024-12-09 23:25:12.189702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:43:31.747 [2024-12-09 23:25:12.189712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.193184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.193236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:31.747 [2024-12-09 23:25:12.193248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:43:31.747 [2024-12-09 23:25:12.193256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.201503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.201563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:31.747 [2024-12-09 23:25:12.201575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.227 ms 00:43:31.747 [2024-12-09 23:25:12.201583] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.208775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.208972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:31.747 [2024-12-09 23:25:12.209008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.144 ms 00:43:31.747 [2024-12-09 23:25:12.209016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.235424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.235476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:31.747 [2024-12-09 23:25:12.235490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.336 ms 00:43:31.747 [2024-12-09 23:25:12.235497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.252779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.253017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:31.747 [2024-12-09 23:25:12.253048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.212 ms 00:43:31.747 [2024-12-09 23:25:12.253057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.253212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.253223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:31.747 [2024-12-09 23:25:12.253233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:43:31.747 [2024-12-09 23:25:12.253249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.280317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.280520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:31.747 [2024-12-09 23:25:12.280542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.050 ms 00:43:31.747 [2024-12-09 23:25:12.280551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.306479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.306530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:31.747 [2024-12-09 23:25:12.306544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.866 ms 00:43:31.747 [2024-12-09 23:25:12.306552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.331999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.332051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:31.747 [2024-12-09 23:25:12.332065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.378 ms 00:43:31.747 [2024-12-09 23:25:12.332073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.358168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.747 [2024-12-09 23:25:12.358219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:31.747 [2024-12-09 23:25:12.358232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.990 ms 00:43:31.747 [2024-12-09 23:25:12.358240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.747 [2024-12-09 23:25:12.358309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:31.747 [2024-12-09 23:25:12.358327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:31.747 [2024-12-09 23:25:12.358458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 
[2024-12-09 23:25:12.358511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:43:31.748 [2024-12-09 23:25:12.358706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.358979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:31.748 [2024-12-09 23:25:12.359175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:31.748 [2024-12-09 23:25:12.359184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:31.748 [2024-12-09 23:25:12.359193] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:31.748 [2024-12-09 23:25:12.359201] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:31.748 [2024-12-09 23:25:12.359209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:31.748 [2024-12-09 23:25:12.359217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:31.748 [2024-12-09 23:25:12.359225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:31.748 [2024-12-09 23:25:12.359233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:31.748 [2024-12-09 23:25:12.359241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:31.748 [2024-12-09 23:25:12.359248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:31.749 [2024-12-09 23:25:12.359255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:31.749 [2024-12-09 23:25:12.359263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.749 [2024-12-09 23:25:12.359274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:31.749 [2024-12-09 23:25:12.359283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:43:31.749 [2024-12-09 23:25:12.359291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.749 [2024-12-09 23:25:12.373112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.749 [2024-12-09 23:25:12.373156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:31.749 [2024-12-09 23:25:12.373169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.797 ms 00:43:31.749 [2024-12-09 23:25:12.373177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.749 [2024-12-09 23:25:12.373597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.749 [2024-12-09 23:25:12.373609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:31.749 [2024-12-09 23:25:12.373619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:43:31.749 [2024-12-09 23:25:12.373627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.412921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.412993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:32.010 [2024-12-09 23:25:12.413007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.413017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.413147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 
23:25:12.413159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:32.010 [2024-12-09 23:25:12.413167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.413176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.413238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.413248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:32.010 [2024-12-09 23:25:12.413256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.413265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.413284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.413296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:32.010 [2024-12-09 23:25:12.413304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.413312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.500453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.500514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:32.010 [2024-12-09 23:25:12.500528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.500536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:32.010 [2024-12-09 23:25:12.572104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:32.010 [2024-12-09 23:25:12.572220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:32.010 [2024-12-09 23:25:12.572288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:32.010 [2024-12-09 23:25:12.572417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572461] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:32.010 [2024-12-09 23:25:12.572480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:32.010 [2024-12-09 23:25:12.572557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:32.010 [2024-12-09 23:25:12.572628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:32.010 [2024-12-09 23:25:12.572640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:32.010 [2024-12-09 23:25:12.572648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.010 [2024-12-09 23:25:12.572807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.503 ms, result 0 00:43:32.953 00:43:32.953 00:43:33.213 23:25:13 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76884 00:43:33.213 23:25:13 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:43:33.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:33.213 23:25:13 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76884 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76884 ']' 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:33.213 23:25:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:43:33.213 [2024-12-09 23:25:13.689370] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:43:33.213 [2024-12-09 23:25:13.689528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76884 ] 00:43:33.474 [2024-12-09 23:25:13.851868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.474 [2024-12-09 23:25:13.985902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:34.418 23:25:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:34.418 23:25:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:43:34.418 23:25:14 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:43:34.418 [2024-12-09 23:25:14.922747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:34.418 [2024-12-09 23:25:14.922840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:34.680 [2024-12-09 23:25:15.081845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.081923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:34.680 [2024-12-09 23:25:15.081941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:34.680 [2024-12-09 23:25:15.081951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.085061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.085113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:34.680 [2024-12-09 23:25:15.085128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:43:34.680 [2024-12-09 23:25:15.085137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.085287] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:34.680 [2024-12-09 23:25:15.086115] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:34.680 [2024-12-09 23:25:15.086155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.086164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:34.680 [2024-12-09 23:25:15.086177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:43:34.680 [2024-12-09 23:25:15.086185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.088059] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:34.680 [2024-12-09 23:25:15.102670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.102739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:34.680 [2024-12-09 23:25:15.102755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.617 ms 00:43:34.680 [2024-12-09 23:25:15.102766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.102900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.102916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:34.680 [2024-12-09 23:25:15.102929] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:43:34.680 [2024-12-09 23:25:15.102939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.112017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.112077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:34.680 [2024-12-09 23:25:15.112089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.002 ms 00:43:34.680 [2024-12-09 23:25:15.112100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.112232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.112245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:34.680 [2024-12-09 23:25:15.112255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:43:34.680 [2024-12-09 23:25:15.112270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.112298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.680 [2024-12-09 23:25:15.112308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:34.680 [2024-12-09 23:25:15.112316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:34.680 [2024-12-09 23:25:15.112326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.680 [2024-12-09 23:25:15.112352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:34.681 [2024-12-09 23:25:15.116507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.681 [2024-12-09 23:25:15.116574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:34.681 [2024-12-09 23:25:15.116589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.159 ms 00:43:34.681 [2024-12-09 23:25:15.116597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.681 [2024-12-09 23:25:15.116686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.681 [2024-12-09 23:25:15.116697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:34.681 [2024-12-09 23:25:15.116709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:43:34.681 [2024-12-09 23:25:15.116730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.681 [2024-12-09 23:25:15.116754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:34.681 [2024-12-09 23:25:15.116779] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:34.681 [2024-12-09 23:25:15.116828] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:34.681 [2024-12-09 23:25:15.116845] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:34.681 [2024-12-09 23:25:15.116954] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:34.681 [2024-12-09 23:25:15.116966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:34.681 [2024-12-09 23:25:15.116998] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:34.681 [2024-12-09 23:25:15.117009] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117021] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117029] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:34.681 [2024-12-09 23:25:15.117040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:34.681 [2024-12-09 23:25:15.117047] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:34.681 [2024-12-09 23:25:15.117059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:34.681 [2024-12-09 23:25:15.117068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.681 [2024-12-09 23:25:15.117078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:34.681 [2024-12-09 23:25:15.117086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:43:34.681 [2024-12-09 23:25:15.117097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.681 [2024-12-09 23:25:15.117186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.681 [2024-12-09 23:25:15.117208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:34.681 [2024-12-09 23:25:15.117215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:43:34.681 [2024-12-09 23:25:15.117225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.681 [2024-12-09 23:25:15.117330] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:34.681 [2024-12-09 23:25:15.117342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:34.681 [2024-12-09 23:25:15.117352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:34.681 [2024-12-09 23:25:15.117379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:34.681 [2024-12-09 23:25:15.117405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:34.681 [2024-12-09 23:25:15.117421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:34.681 [2024-12-09 23:25:15.117430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:34.681 [2024-12-09 23:25:15.117437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:34.681 [2024-12-09 23:25:15.117445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:34.681 [2024-12-09 23:25:15.117452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:34.681 [2024-12-09 23:25:15.117463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 
[2024-12-09 23:25:15.117471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:34.681 [2024-12-09 23:25:15.117480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:34.681 [2024-12-09 23:25:15.117510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:34.681 [2024-12-09 23:25:15.117536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:34.681 [2024-12-09 23:25:15.117559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:34.681 [2024-12-09 23:25:15.117584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:34.681 [2024-12-09 23:25:15.117605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:34.681 [2024-12-09 23:25:15.117622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:34.681 [2024-12-09 23:25:15.117631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:34.681 [2024-12-09 23:25:15.117668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:34.681 [2024-12-09 23:25:15.117678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:34.681 [2024-12-09 23:25:15.117685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:34.681 [2024-12-09 23:25:15.117696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:34.681 [2024-12-09 23:25:15.117713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:34.681 [2024-12-09 23:25:15.117720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117729] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:34.681 [2024-12-09 23:25:15.117739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:34.681 [2024-12-09 23:25:15.117750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:34.681 [2024-12-09 23:25:15.117769] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:43:34.681 [2024-12-09 23:25:15.117778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:34.681 [2024-12-09 23:25:15.117787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:34.681 [2024-12-09 23:25:15.117794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:34.681 [2024-12-09 23:25:15.117803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:34.681 [2024-12-09 23:25:15.117811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:34.681 [2024-12-09 23:25:15.117821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:34.681 [2024-12-09 23:25:15.117831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:34.681 [2024-12-09 23:25:15.117845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:34.681 [2024-12-09 23:25:15.117852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:34.681 [2024-12-09 23:25:15.117863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:43:34.681 [2024-12-09 23:25:15.117870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:34.681 [2024-12-09 23:25:15.117879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:43:34.681 [2024-12-09 23:25:15.117887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:34.681 [2024-12-09 23:25:15.117895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:34.681 [2024-12-09 23:25:15.117903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:34.681 [2024-12-09 23:25:15.117912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:34.681 [2024-12-09 23:25:15.117920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:34.681 [2024-12-09 23:25:15.117929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:34.681 [2024-12-09 23:25:15.117937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:34.681 [2024-12-09 23:25:15.117946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:34.681 [2024-12-09 23:25:15.117953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:34.681 [2024-12-09 23:25:15.117963] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:34.682 [2024-12-09 
23:25:15.117972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:34.682 [2024-12-09 23:25:15.118000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:34.682 [2024-12-09 23:25:15.118008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:34.682 [2024-12-09 23:25:15.118018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:34.682 [2024-12-09 23:25:15.118025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:34.682 [2024-12-09 23:25:15.118036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.118044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:34.682 [2024-12-09 23:25:15.118055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:43:34.682 [2024-12-09 23:25:15.118065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.151455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.151513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:34.682 [2024-12-09 23:25:15.151528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.318 ms 00:43:34.682 [2024-12-09 23:25:15.151540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.151685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.151697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:34.682 [2024-12-09 23:25:15.151707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:43:34.682 [2024-12-09 23:25:15.151715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.187586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.187639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:34.682 [2024-12-09 23:25:15.187653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.841 ms 00:43:34.682 [2024-12-09 23:25:15.187662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.187762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.187772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:34.682 [2024-12-09 23:25:15.187783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:34.682 [2024-12-09 23:25:15.187791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.188413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.188449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:34.682 [2024-12-09 23:25:15.188463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:43:34.682 [2024-12-09 23:25:15.188471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.188630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.188639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:34.682 [2024-12-09 23:25:15.188650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:43:34.682 [2024-12-09 23:25:15.188657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.207461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.207511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:34.682 [2024-12-09 23:25:15.207525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.776 ms 00:43:34.682 [2024-12-09 23:25:15.207534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.229825] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:43:34.682 [2024-12-09 23:25:15.229885] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:34.682 [2024-12-09 23:25:15.229905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.229914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:34.682 [2024-12-09 23:25:15.229928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.243 ms 00:43:34.682 [2024-12-09 23:25:15.229944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.256470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.256527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:34.682 [2024-12-09 23:25:15.256544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.390 ms 00:43:34.682 [2024-12-09 23:25:15.256552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.270046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.270097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:34.682 [2024-12-09 23:25:15.270115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.383 ms 00:43:34.682 [2024-12-09 23:25:15.270123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.283339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.283548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:34.682 [2024-12-09 23:25:15.283577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.116 ms 00:43:34.682 [2024-12-09 23:25:15.283584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.682 [2024-12-09 23:25:15.284283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.682 [2024-12-09 23:25:15.284311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:34.682 [2024-12-09 23:25:15.284325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:43:34.682 [2024-12-09 23:25:15.284333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 
23:25:15.352373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.352439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:34.943 [2024-12-09 23:25:15.352458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.006 ms 00:43:34.943 [2024-12-09 23:25:15.352467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.364623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:34.943 [2024-12-09 23:25:15.385144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.385207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:34.943 [2024-12-09 23:25:15.385224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.561 ms 00:43:34.943 [2024-12-09 23:25:15.385235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.385330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.385344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:34.943 [2024-12-09 23:25:15.385354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:43:34.943 [2024-12-09 23:25:15.385365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.385420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.385431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:34.943 [2024-12-09 23:25:15.385440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:43:34.943 [2024-12-09 23:25:15.385454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.385480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.385490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:34.943 [2024-12-09 23:25:15.385499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:34.943 [2024-12-09 23:25:15.385511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.385549] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:34.943 [2024-12-09 23:25:15.385565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.385578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:34.943 [2024-12-09 23:25:15.385589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:43:34.943 [2024-12-09 23:25:15.385596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.412721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.412777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:34.943 [2024-12-09 23:25:15.412794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.090 ms 00:43:34.943 [2024-12-09 23:25:15.412803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.412939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:34.943 [2024-12-09 23:25:15.412952] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:34.943 [2024-12-09 23:25:15.412964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:43:34.943 [2024-12-09 23:25:15.412975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:34.943 [2024-12-09 23:25:15.414340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:34.943 [2024-12-09 23:25:15.417934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.141 ms, result 0 00:43:34.943 [2024-12-09 23:25:15.420508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:34.943 Some configs were skipped because the RPC state that can call them passed over. 00:43:34.943 23:25:15 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:43:35.203 [2024-12-09 23:25:15.661718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:35.203 [2024-12-09 23:25:15.661811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:43:35.203 [2024-12-09 23:25:15.661828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:43:35.203 [2024-12-09 23:25:15.661841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:35.203 [2024-12-09 23:25:15.661881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.096 ms, result 0 00:43:35.203 true 00:43:35.203 23:25:15 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:43:35.463 [2024-12-09 23:25:15.885864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:35.463 [2024-12-09 23:25:15.886092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:43:35.463 [2024-12-09 23:25:15.886123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:43:35.463 [2024-12-09 23:25:15.886132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:35.463 [2024-12-09 23:25:15.886185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.093 ms, result 0 00:43:35.463 true 00:43:35.463 23:25:15 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76884 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76884 ']' 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76884 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76884 00:43:35.463 killing process with pid 76884 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76884' 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76884 00:43:35.463 23:25:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76884 00:43:36.081 [2024-12-09 23:25:16.690436] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.081 [2024-12-09 23:25:16.690524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:36.081 [2024-12-09 23:25:16.690540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:36.081 [2024-12-09 23:25:16.690550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.081 [2024-12-09 23:25:16.690578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:36.081 [2024-12-09 23:25:16.693771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.081 [2024-12-09 23:25:16.694008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:36.081 [2024-12-09 23:25:16.694041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:43:36.081 [2024-12-09 23:25:16.694050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.081 [2024-12-09 23:25:16.694376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.081 [2024-12-09 23:25:16.694388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:36.081 [2024-12-09 23:25:16.694400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:43:36.081 [2024-12-09 23:25:16.694408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.081 [2024-12-09 23:25:16.699197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.081 [2024-12-09 23:25:16.699244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:36.081 [2024-12-09 23:25:16.699260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:43:36.081 [2024-12-09 23:25:16.699268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.706210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.706257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:36.347 [2024-12-09 23:25:16.706276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.888 ms 00:43:36.347 [2024-12-09 23:25:16.706284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.717879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.718119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:36.347 [2024-12-09 23:25:16.718149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.520 ms 00:43:36.347 [2024-12-09 23:25:16.718157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.727728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.727787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:36.347 [2024-12-09 23:25:16.727803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.475 ms 00:43:36.347 [2024-12-09 23:25:16.727812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.728005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.728019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:36.347 [2024-12-09 23:25:16.728032] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:43:36.347 [2024-12-09 23:25:16.728041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.739815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.739865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:36.347 [2024-12-09 23:25:16.739879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.746 ms 00:43:36.347 [2024-12-09 23:25:16.739888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.751344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.751394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:36.347 [2024-12-09 23:25:16.751415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.394 ms 00:43:36.347 [2024-12-09 23:25:16.751423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.347 [2024-12-09 23:25:16.761824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.347 [2024-12-09 23:25:16.762039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:36.347 [2024-12-09 23:25:16.762066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.340 ms 00:43:36.348 [2024-12-09 23:25:16.762074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.348 [2024-12-09 23:25:16.772767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.348 [2024-12-09 23:25:16.772818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:36.348 [2024-12-09 23:25:16.772832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.514 ms 00:43:36.348 [2024-12-09 23:25:16.772840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.348 [2024-12-09 23:25:16.772893] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:36.348 [2024-12-09 23:25:16.772909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.772980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 
23:25:16.773029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:43:36.348 [2024-12-09 23:25:16.773290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:36.348 [2024-12-09 23:25:16.773772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:36.349 [2024-12-09 23:25:16.773919] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:36.349 [2024-12-09 23:25:16.773935] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:36.349 [2024-12-09 23:25:16.773946] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:36.349 [2024-12-09 23:25:16.773957] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:36.349 [2024-12-09 23:25:16.773970] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:36.349 [2024-12-09 23:25:16.773992] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:36.349 [2024-12-09 23:25:16.774000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:36.349 [2024-12-09 23:25:16.774010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:36.349 [2024-12-09 23:25:16.774018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:36.349 [2024-12-09 23:25:16.774026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:36.349 [2024-12-09 23:25:16.774032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:36.349 [2024-12-09 23:25:16.774042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:43:36.349 [2024-12-09 23:25:16.774050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:36.349 [2024-12-09 23:25:16.774061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:43:36.349 [2024-12-09 23:25:16.774069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.787884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.349 [2024-12-09 23:25:16.787930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:36.349 [2024-12-09 23:25:16.787947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.767 ms 00:43:36.349 [2024-12-09 23:25:16.787956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.788438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:36.349 [2024-12-09 23:25:16.788459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:36.349 [2024-12-09 23:25:16.788474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:43:36.349 [2024-12-09 23:25:16.788483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.838092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.349 [2024-12-09 23:25:16.838145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:36.349 [2024-12-09 23:25:16.838160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.349 [2024-12-09 23:25:16.838170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.838285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.349 [2024-12-09 23:25:16.838297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:36.349 [2024-12-09 23:25:16.838312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.349 [2024-12-09 23:25:16.838321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.838378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.349 [2024-12-09 23:25:16.838389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:36.349 [2024-12-09 23:25:16.838403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.349 [2024-12-09 23:25:16.838412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.838433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.349 [2024-12-09 23:25:16.838443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:36.349 [2024-12-09 23:25:16.838454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.349 [2024-12-09 23:25:16.838465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.349 [2024-12-09 23:25:16.924054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.349 [2024-12-09 23:25:16.924117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:36.349 [2024-12-09 23:25:16.924135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.349 [2024-12-09 23:25:16.924145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.610 [2024-12-09 
23:25:16.996192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:36.611 [2024-12-09 23:25:16.996267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:36.611 [2024-12-09 23:25:16.996401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:36.611 [2024-12-09 23:25:16.996464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:36.611 [2024-12-09 23:25:16.996604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:36.611 [2024-12-09 23:25:16.996670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:36.611 [2024-12-09 23:25:16.996748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.996808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:36.611 [2024-12-09 23:25:16.996818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:36.611 [2024-12-09 23:25:16.996829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:36.611 [2024-12-09 23:25:16.996837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:36.611 [2024-12-09 23:25:16.997027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 306.527 ms, result 0 00:43:37.180 23:25:17 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:43:37.180 23:25:17 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:37.180 [2024-12-09 23:25:17.667040] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:37.180 [2024-12-09 23:25:17.667157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76937 ] 00:43:37.441 [2024-12-09 23:25:17.823595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:37.441 [2024-12-09 23:25:17.905994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.704 [2024-12-09 23:25:18.123606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:37.704 [2024-12-09 23:25:18.123662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:37.704 [2024-12-09 23:25:18.282399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.282452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:37.704 [2024-12-09 23:25:18.282466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:37.704 [2024-12-09 23:25:18.282474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.285424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.285465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:37.704 [2024-12-09 23:25:18.285476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:43:37.704 [2024-12-09 23:25:18.285484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.285591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:37.704 [2024-12-09 23:25:18.286750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:37.704 [2024-12-09 23:25:18.286793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.286805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:37.704 [2024-12-09 23:25:18.286814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:43:37.704 [2024-12-09 23:25:18.286822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.287951] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:37.704 [2024-12-09 23:25:18.300669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.300703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:37.704 [2024-12-09 23:25:18.300716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:43:37.704 [2024-12-09 23:25:18.300725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.300817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.300828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:37.704 [2024-12-09 23:25:18.300837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:43:37.704 [2024-12-09 23:25:18.300843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.305873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.305902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:37.704 [2024-12-09 23:25:18.305912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:43:37.704 [2024-12-09 23:25:18.305921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.306031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.306041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:37.704 [2024-12-09 23:25:18.306050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:43:37.704 [2024-12-09 23:25:18.306057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.306084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.306092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:37.704 [2024-12-09 23:25:18.306100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:37.704 [2024-12-09 23:25:18.306107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.306128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:37.704 [2024-12-09 23:25:18.309428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.309453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:37.704 [2024-12-09 23:25:18.309462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:43:37.704 [2024-12-09 23:25:18.309470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.309507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.309515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:37.704 [2024-12-09 23:25:18.309523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:37.704 [2024-12-09 23:25:18.309530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.309550] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:37.704 [2024-12-09 23:25:18.309568] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:37.704 [2024-12-09 23:25:18.309601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:37.704 [2024-12-09 23:25:18.309617] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:37.704 [2024-12-09 23:25:18.309728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:37.704 [2024-12-09 23:25:18.309738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:37.704 [2024-12-09 23:25:18.309748] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:37.704 [2024-12-09 23:25:18.309760] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:37.704 [2024-12-09 23:25:18.309769] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:37.704 [2024-12-09 23:25:18.309777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:37.704 [2024-12-09 23:25:18.309784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:37.704 [2024-12-09 23:25:18.309791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:37.704 [2024-12-09 23:25:18.309798] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:37.704 [2024-12-09 23:25:18.309806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.309813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:37.704 [2024-12-09 23:25:18.309821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:43:37.704 [2024-12-09 23:25:18.309828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.309915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.704 [2024-12-09 23:25:18.309925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:37.704 [2024-12-09 23:25:18.309933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:43:37.704 [2024-12-09 23:25:18.309940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.704 [2024-12-09 23:25:18.310076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:37.704 [2024-12-09 23:25:18.310088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:37.704 [2024-12-09 23:25:18.310097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:37.704 [2024-12-09 23:25:18.310119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:37.704 [2024-12-09 23:25:18.310142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:37.704 [2024-12-09 23:25:18.310155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:37.704 [2024-12-09 23:25:18.310168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:37.704 [2024-12-09 23:25:18.310174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:37.704 [2024-12-09 23:25:18.310181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:37.704 [2024-12-09 23:25:18.310188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:37.704 [2024-12-09 23:25:18.310195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310204] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:37.704 [2024-12-09 23:25:18.310210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:37.704 [2024-12-09 23:25:18.310231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:37.704 [2024-12-09 23:25:18.310251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:37.704 [2024-12-09 23:25:18.310273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:37.704 [2024-12-09 23:25:18.310280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:37.704 [2024-12-09 23:25:18.310287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:37.705 [2024-12-09 23:25:18.310293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:37.705 [2024-12-09 23:25:18.310299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:37.705 [2024-12-09 23:25:18.310305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:37.705 [2024-12-09 23:25:18.310312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:37.705 [2024-12-09 23:25:18.310318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:37.705 [2024-12-09 23:25:18.310325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:37.705 [2024-12-09 23:25:18.310331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:37.705 [2024-12-09 23:25:18.310337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:37.705 [2024-12-09 23:25:18.310344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:37.705 [2024-12-09 23:25:18.310351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:37.705 [2024-12-09 23:25:18.310357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.705 [2024-12-09 23:25:18.310364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:37.705 [2024-12-09 23:25:18.310370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:37.705 [2024-12-09 23:25:18.310376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.705 [2024-12-09 23:25:18.310382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:37.705 [2024-12-09 23:25:18.310390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:37.705 [2024-12-09 23:25:18.310399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:37.705 [2024-12-09 23:25:18.310406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:37.705 [2024-12-09 23:25:18.310413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:37.705 
[2024-12-09 23:25:18.310420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:37.705 [2024-12-09 23:25:18.310427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:37.705 [2024-12-09 23:25:18.310434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:37.705 [2024-12-09 23:25:18.310440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:37.705 [2024-12-09 23:25:18.310447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:37.705 [2024-12-09 23:25:18.310455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:37.705 [2024-12-09 23:25:18.310464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:37.705 [2024-12-09 23:25:18.310479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:37.705 [2024-12-09 23:25:18.310486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:43:37.705 [2024-12-09 23:25:18.310492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:37.705 [2024-12-09 23:25:18.310499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:43:37.705 [2024-12-09 23:25:18.310506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:37.705 [2024-12-09 23:25:18.310514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:37.705 [2024-12-09 23:25:18.310520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:37.705 [2024-12-09 23:25:18.310527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:37.705 [2024-12-09 23:25:18.310534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:37.705 [2024-12-09 23:25:18.310569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:37.705 [2024-12-09 23:25:18.310577] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:37.705 [2024-12-09 23:25:18.310593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:37.705 [2024-12-09 23:25:18.310600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:37.705 [2024-12-09 23:25:18.310607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:37.705 [2024-12-09 23:25:18.310615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.705 [2024-12-09 23:25:18.310625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:37.705 [2024-12-09 23:25:18.310632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:43:37.705 [2024-12-09 23:25:18.310640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.705 [2024-12-09 23:25:18.337012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.705 [2024-12-09 23:25:18.337049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:37.705 [2024-12-09 23:25:18.337060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.320 ms 00:43:37.705 [2024-12-09 23:25:18.337067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.705 [2024-12-09 23:25:18.337194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.705 [2024-12-09 23:25:18.337204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:37.705 [2024-12-09 23:25:18.337213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:43:37.705 [2024-12-09 23:25:18.337220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.378642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.378833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:37.966 [2024-12-09 23:25:18.378856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.400 ms 00:43:37.966 [2024-12-09 23:25:18.378865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.378969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.378981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:37.966 [2024-12-09 23:25:18.379007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:37.966 [2024-12-09 23:25:18.379014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.379334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.379349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:37.966 [2024-12-09 23:25:18.379364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:43:37.966 [2024-12-09 23:25:18.379371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 
23:25:18.379495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.379504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:37.966 [2024-12-09 23:25:18.379513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:43:37.966 [2024-12-09 23:25:18.379520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.393393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.393540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:37.966 [2024-12-09 23:25:18.393559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.851 ms 00:43:37.966 [2024-12-09 23:25:18.393570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.406586] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:43:37.966 [2024-12-09 23:25:18.406620] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:37.966 [2024-12-09 23:25:18.406631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.406639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:37.966 [2024-12-09 23:25:18.406648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.910 ms 00:43:37.966 [2024-12-09 23:25:18.406655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.430508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.430541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:37.966 [2024-12-09 23:25:18.430552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.783 ms 00:43:37.966 [2024-12-09 23:25:18.430561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.442237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.442281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:37.966 [2024-12-09 23:25:18.442291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.607 ms 00:43:37.966 [2024-12-09 23:25:18.442298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.453524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.453554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:37.966 [2024-12-09 23:25:18.453564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.164 ms 00:43:37.966 [2024-12-09 23:25:18.453571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.454213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.454231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:37.966 [2024-12-09 23:25:18.454240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:43:37.966 [2024-12-09 23:25:18.454247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.509065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.509111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:37.966 [2024-12-09 23:25:18.509124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.794 ms 00:43:37.966 [2024-12-09 23:25:18.509131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.519343] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:37.966 [2024-12-09 23:25:18.533107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.533141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:37.966 [2024-12-09 23:25:18.533154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.887 ms 00:43:37.966 [2024-12-09 23:25:18.533167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.533240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.533251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:37.966 [2024-12-09 23:25:18.533259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:37.966 [2024-12-09 23:25:18.533267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.533313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.533321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:37.966 [2024-12-09 23:25:18.533329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:43:37.966 [2024-12-09 23:25:18.533340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.533372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.533381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:37.966 [2024-12-09 23:25:18.533388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:37.966 [2024-12-09 23:25:18.533395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.533423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:37.966 [2024-12-09 23:25:18.533433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.533441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:37.966 [2024-12-09 23:25:18.533448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:37.966 [2024-12-09 23:25:18.533456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.557391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.557425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:37.966 [2024-12-09 23:25:18.557437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.913 ms 00:43:37.966 [2024-12-09 23:25:18.557446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.557528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:37.966 [2024-12-09 23:25:18.557538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:43:37.966 [2024-12-09 23:25:18.557546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:43:37.966 [2024-12-09 23:25:18.557553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:37.966 [2024-12-09 23:25:18.558382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:37.966 [2024-12-09 23:25:18.561270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.716 ms, result 0 00:43:37.966 [2024-12-09 23:25:18.562553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:37.966 [2024-12-09 23:25:18.575220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:39.351  [2024-12-09T23:25:20.930Z] Copying: 22/256 [MB] (22 MBps) [2024-12-09T23:25:21.874Z] Copying: 49/256 [MB] (27 MBps) [2024-12-09T23:25:22.816Z] Copying: 64/256 [MB] (14 MBps) [2024-12-09T23:25:23.760Z] Copying: 80/256 [MB] (16 MBps) [2024-12-09T23:25:24.703Z] Copying: 99/256 [MB] (18 MBps) [2024-12-09T23:25:25.648Z] Copying: 118/256 [MB] (18 MBps) [2024-12-09T23:25:26.588Z] Copying: 134/256 [MB] (16 MBps) [2024-12-09T23:25:27.990Z] Copying: 147/256 [MB] (12 MBps) [2024-12-09T23:25:28.933Z] Copying: 167/256 [MB] (20 MBps) [2024-12-09T23:25:29.873Z] Copying: 188/256 [MB] (20 MBps) [2024-12-09T23:25:30.822Z] Copying: 210/256 [MB] (21 MBps) [2024-12-09T23:25:31.765Z] Copying: 225/256 [MB] (15 MBps) [2024-12-09T23:25:32.711Z] Copying: 240696/262144 [kB] (10024 kBps) [2024-12-09T23:25:33.651Z] Copying: 246/256 [MB] (11 MBps) [2024-12-09T23:25:33.652Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-09 23:25:33.509350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:53.016 [2024-12-09 23:25:33.516847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.517007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:53.016 [2024-12-09 23:25:33.517032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:53.016 [2024-12-09 23:25:33.517039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.517061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:53.016 [2024-12-09 23:25:33.519322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.519349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:53.016 [2024-12-09 23:25:33.519357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:43:53.016 [2024-12-09 23:25:33.519364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.519576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.519586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:53.016 [2024-12-09 23:25:33.519593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:43:53.016 [2024-12-09 23:25:33.519599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.522383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 
23:25:33.522409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:53.016 [2024-12-09 23:25:33.522416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:43:53.016 [2024-12-09 23:25:33.522423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.527627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.527648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:53.016 [2024-12-09 23:25:33.527655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.189 ms 00:43:53.016 [2024-12-09 23:25:33.527662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.546185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.546212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:53.016 [2024-12-09 23:25:33.546221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:43:53.016 [2024-12-09 23:25:33.546228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.559013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.559131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:53.016 [2024-12-09 23:25:33.559150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.747 ms 00:43:53.016 [2024-12-09 23:25:33.559157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.559262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.559271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:53.016 [2024-12-09 23:25:33.559285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:43:53.016 [2024-12-09 23:25:33.559291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.578010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.578105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:53.016 [2024-12-09 23:25:33.578117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.707 ms 00:43:53.016 [2024-12-09 23:25:33.578123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.597308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.597402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:53.016 [2024-12-09 23:25:33.597413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.161 ms 00:43:53.016 [2024-12-09 23:25:33.597419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.614840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.614865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:53.016 [2024-12-09 23:25:33.614872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.397 ms 00:43:53.016 [2024-12-09 23:25:33.614879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.632402] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.016 [2024-12-09 23:25:33.632501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:53.016 [2024-12-09 23:25:33.632513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.476 ms 00:43:53.016 [2024-12-09 23:25:33.632518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.016 [2024-12-09 23:25:33.632543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:53.016 [2024-12-09 23:25:33.632555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632681] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:53.016 [2024-12-09 23:25:33.632812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 
[2024-12-09 23:25:33.632828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:43:53.017 [2024-12-09 23:25:33.632972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.632999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:53.017 [2024-12-09 23:25:33.633172] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:53.017 [2024-12-09 23:25:33.633178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:53.017 [2024-12-09 23:25:33.633184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:53.017 [2024-12-09 23:25:33.633190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:53.017 [2024-12-09 23:25:33.633196] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:53.017 [2024-12-09 23:25:33.633203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:53.017 [2024-12-09 23:25:33.633209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:53.017 [2024-12-09 23:25:33.633214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:53.017 [2024-12-09 23:25:33.633222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:53.017 [2024-12-09 23:25:33.633227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:53.017 [2024-12-09 23:25:33.633231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:53.017 [2024-12-09 23:25:33.633237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.017 [2024-12-09 23:25:33.633243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:53.017 [2024-12-09 23:25:33.633249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:43:53.017 [2024-12-09 23:25:33.633255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.017 [2024-12-09 23:25:33.643285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.017 [2024-12-09 23:25:33.643381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:53.017 [2024-12-09 23:25:33.643392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.017 ms 00:43:53.017 [2024-12-09 23:25:33.643398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.017 [2024-12-09 23:25:33.643701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.017 [2024-12-09 23:25:33.643715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:53.017 [2024-12-09 23:25:33.643722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:43:53.017 [2024-12-09 23:25:33.643728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.275 [2024-12-09 23:25:33.672935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.275 [2024-12-09 23:25:33.672971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:53.275 [2024-12-09 23:25:33.672996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.673007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.673080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.673088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:53.276 [2024-12-09 23:25:33.673095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.673102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.673141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.673150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:53.276 [2024-12-09 23:25:33.673158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.673164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.673180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.673186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:53.276 [2024-12-09 23:25:33.673192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.673198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.736774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.736818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:53.276 [2024-12-09 23:25:33.736829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.736835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:53.276 [2024-12-09 23:25:33.788083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:53.276 [2024-12-09 23:25:33.788163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:53.276 [2024-12-09 23:25:33.788213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:53.276 
[2024-12-09 23:25:33.788319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:53.276 [2024-12-09 23:25:33.788371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:53.276 [2024-12-09 23:25:33.788430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:53.276 [2024-12-09 23:25:33.788491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:53.276 [2024-12-09 23:25:33.788498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:53.276 [2024-12-09 23:25:33.788504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.276 [2024-12-09 23:25:33.788634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.773 ms, result 0 00:43:54.212 00:43:54.212 00:43:54.212 23:25:34 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:43:54.212 23:25:34 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:43:54.473 23:25:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:54.732 [2024-12-09 23:25:35.137030] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:43:54.732 [2024-12-09 23:25:35.137160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77119 ] 00:43:54.732 [2024-12-09 23:25:35.292542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.991 [2024-12-09 23:25:35.385503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:54.991 [2024-12-09 23:25:35.621896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:54.991 [2024-12-09 23:25:35.621959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:55.250 [2024-12-09 23:25:35.779005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.250 [2024-12-09 23:25:35.779046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:55.250 [2024-12-09 23:25:35.779059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:43:55.250 [2024-12-09 23:25:35.779066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.250 [2024-12-09 23:25:35.781286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.250 [2024-12-09 23:25:35.781315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:55.251 [2024-12-09 23:25:35.781323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:43:55.251 [2024-12-09 23:25:35.781330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.781392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:55.251 [2024-12-09 23:25:35.781972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:55.251 [2024-12-09 23:25:35.782002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.782010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:55.251 [2024-12-09 23:25:35.782018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:43:55.251 [2024-12-09 23:25:35.782024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.783356] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:55.251 [2024-12-09 23:25:35.793859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.793886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:55.251 [2024-12-09 23:25:35.793895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.504 ms 00:43:55.251 [2024-12-09 23:25:35.793902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.793973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.793995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:55.251 [2024-12-09 23:25:35.794002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:43:55.251 [2024-12-09 23:25:35.794008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.800215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:43:55.251 [2024-12-09 23:25:35.800239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:55.251 [2024-12-09 23:25:35.800247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:43:55.251 [2024-12-09 23:25:35.800253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.800326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.800334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:55.251 [2024-12-09 23:25:35.800341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:43:55.251 [2024-12-09 23:25:35.800348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.800367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.800375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:55.251 [2024-12-09 23:25:35.800381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:55.251 [2024-12-09 23:25:35.800387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.800407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:55.251 [2024-12-09 23:25:35.803355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.803377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:55.251 [2024-12-09 23:25:35.803384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:43:55.251 [2024-12-09 23:25:35.803390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.803423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.803430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:55.251 [2024-12-09 23:25:35.803436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:43:55.251 [2024-12-09 23:25:35.803442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.803458] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:55.251 [2024-12-09 23:25:35.803476] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:55.251 [2024-12-09 23:25:35.803504] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:55.251 [2024-12-09 23:25:35.803517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:55.251 [2024-12-09 23:25:35.803599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:55.251 [2024-12-09 23:25:35.803608] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:55.251 [2024-12-09 23:25:35.803616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:55.251 [2024-12-09 23:25:35.803626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803633] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:55.251 [2024-12-09 23:25:35.803646] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:55.251 [2024-12-09 23:25:35.803652] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:55.251 [2024-12-09 23:25:35.803658] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:55.251 [2024-12-09 23:25:35.803665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.803671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:55.251 [2024-12-09 23:25:35.803677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:43:55.251 [2024-12-09 23:25:35.803683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.803750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.251 [2024-12-09 23:25:35.803760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:55.251 [2024-12-09 23:25:35.803766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:43:55.251 [2024-12-09 23:25:35.803771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.251 [2024-12-09 23:25:35.803845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:55.251 [2024-12-09 23:25:35.803854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:55.251 [2024-12-09 23:25:35.803860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:55.251 [2024-12-09 23:25:35.803880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:55.251 [2024-12-09 23:25:35.803898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:55.251 [2024-12-09 23:25:35.803909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:55.251 [2024-12-09 23:25:35.803920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:55.251 [2024-12-09 23:25:35.803926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:55.251 [2024-12-09 23:25:35.803932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:55.251 [2024-12-09 23:25:35.803938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:55.251 [2024-12-09 23:25:35.803943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:55.251 [2024-12-09 23:25:35.803954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803959] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:55.251 [2024-12-09 23:25:35.803969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:55.251 [2024-12-09 23:25:35.803975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:55.251 [2024-12-09 23:25:35.803979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:55.251 [2024-12-09 23:25:35.803999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:55.251 [2024-12-09 23:25:35.804005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:55.251 [2024-12-09 23:25:35.804011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:55.251 [2024-12-09 23:25:35.804016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:55.251 [2024-12-09 23:25:35.804022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:55.251 [2024-12-09 23:25:35.804027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:55.251 [2024-12-09 23:25:35.804032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:55.251 [2024-12-09 23:25:35.804037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:55.251 [2024-12-09 23:25:35.804042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:55.251 [2024-12-09 23:25:35.804047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:55.251 [2024-12-09 23:25:35.804052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:55.251 [2024-12-09 23:25:35.804057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:55.251 [2024-12-09 23:25:35.804062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:55.251 [2024-12-09 23:25:35.804068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:55.251 [2024-12-09 23:25:35.804073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:55.251 [2024-12-09 23:25:35.804078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:55.251 [2024-12-09 23:25:35.804083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.251 [2024-12-09 23:25:35.804088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:55.251 [2024-12-09 23:25:35.804093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:55.252 [2024-12-09 23:25:35.804099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.252 [2024-12-09 23:25:35.804104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:55.252 [2024-12-09 23:25:35.804111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:55.252 [2024-12-09 23:25:35.804121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:55.252 [2024-12-09 23:25:35.804128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:55.252 [2024-12-09 23:25:35.804133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:55.252 [2024-12-09 23:25:35.804139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:55.252 [2024-12-09 23:25:35.804144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:55.252 
[2024-12-09 23:25:35.804150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:55.252 [2024-12-09 23:25:35.804155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:55.252 [2024-12-09 23:25:35.804161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:55.252 [2024-12-09 23:25:35.804168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:55.252 [2024-12-09 23:25:35.804175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:55.252 [2024-12-09 23:25:35.804187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:55.252 [2024-12-09 23:25:35.804193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:43:55.252 [2024-12-09 23:25:35.804199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:55.252 [2024-12-09 23:25:35.804204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:43:55.252 [2024-12-09 23:25:35.804210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:55.252 [2024-12-09 23:25:35.804215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:55.252 [2024-12-09 23:25:35.804220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:55.252 [2024-12-09 23:25:35.804225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:55.252 [2024-12-09 23:25:35.804230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:55.252 [2024-12-09 23:25:35.804258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:55.252 [2024-12-09 23:25:35.804264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:43:55.252 [2024-12-09 23:25:35.804277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:55.252 [2024-12-09 23:25:35.804283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:55.252 [2024-12-09 23:25:35.804288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:55.252 [2024-12-09 23:25:35.804294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.804302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:55.252 [2024-12-09 23:25:35.804308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:43:55.252 [2024-12-09 23:25:35.804314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.828757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.828789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:55.252 [2024-12-09 23:25:35.828798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.383 ms 00:43:55.252 [2024-12-09 23:25:35.828805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.828905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.828912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:55.252 [2024-12-09 23:25:35.828920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:43:55.252 [2024-12-09 23:25:35.828926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.867556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.867586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:55.252 [2024-12-09 23:25:35.867598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.612 ms 00:43:55.252 [2024-12-09 23:25:35.867605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.867668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.867677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:55.252 [2024-12-09 23:25:35.867685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:55.252 [2024-12-09 23:25:35.867691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.868097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.868116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:55.252 [2024-12-09 23:25:35.868123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:43:55.252 [2024-12-09 23:25:35.868134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.868247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.868254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:55.252 [2024-12-09 23:25:35.868261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:43:55.252 [2024-12-09 23:25:35.868267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.252 [2024-12-09 23:25:35.880551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.252 [2024-12-09 23:25:35.880576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:55.252 [2024-12-09 23:25:35.880584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:43:55.252 [2024-12-09 23:25:35.880590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.891289] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:43:55.511 [2024-12-09 23:25:35.891317] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:55.511 [2024-12-09 23:25:35.891327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.891335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:55.511 [2024-12-09 23:25:35.891342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.639 ms 00:43:55.511 [2024-12-09 23:25:35.891349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.909946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.909976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:55.511 [2024-12-09 23:25:35.909995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.540 ms 00:43:55.511 [2024-12-09 23:25:35.910003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.919369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.919395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:55.511 [2024-12-09 23:25:35.919403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.311 ms 00:43:55.511 [2024-12-09 23:25:35.919409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.928459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.928483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:55.511 [2024-12-09 23:25:35.928491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.008 ms 00:43:55.511 [2024-12-09 23:25:35.928498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.928971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.928999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:55.511 [2024-12-09 23:25:35.929008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:43:55.511 [2024-12-09 23:25:35.929014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.978234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:35.978267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:55.511 [2024-12-09 23:25:35.978277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.200 ms 00:43:55.511 [2024-12-09 23:25:35.978284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:35.986313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:55.511 [2024-12-09 23:25:36.000895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.001081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:55.511 [2024-12-09 23:25:36.001097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.547 ms 00:43:55.511 [2024-12-09 23:25:36.001108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.001197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.001207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:55.511 [2024-12-09 23:25:36.001215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:43:55.511 [2024-12-09 23:25:36.001222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.001268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.001275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:55.511 [2024-12-09 23:25:36.001282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:43:55.511 [2024-12-09 23:25:36.001293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.001316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.001324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:55.511 [2024-12-09 23:25:36.001331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:55.511 [2024-12-09 23:25:36.001336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.001366] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:55.511 [2024-12-09 23:25:36.001374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.001381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:55.511 [2024-12-09 23:25:36.001388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:43:55.511 [2024-12-09 23:25:36.001394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.020142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.020268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:55.511 [2024-12-09 23:25:36.020284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms 00:43:55.511 [2024-12-09 23:25:36.020291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.511 [2024-12-09 23:25:36.020366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.511 [2024-12-09 23:25:36.020376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:55.511 [2024-12-09 23:25:36.020383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:43:55.511 [2024-12-09 23:25:36.020389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:43:55.511 [2024-12-09 23:25:36.021207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:55.511 [2024-12-09 23:25:36.023494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.940 ms, result 0 00:43:55.511 [2024-12-09 23:25:36.024599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:55.511 [2024-12-09 23:25:36.035399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:55.774  [2024-12-09T23:25:36.411Z] Copying: 4096/4096 [kB] (average 19 MBps)[2024-12-09 23:25:36.242159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:55.775 [2024-12-09 23:25:36.248523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.248552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:55.775 [2024-12-09 23:25:36.248565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:43:55.775 [2024-12-09 23:25:36.248571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.248587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:55.775 [2024-12-09 23:25:36.250787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.250811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:55.775 [2024-12-09 23:25:36.250819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:43:55.775 [2024-12-09 23:25:36.250826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.252387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.252484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:55.775 [2024-12-09 23:25:36.252496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:43:55.775 [2024-12-09 23:25:36.252502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.255584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.255665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:55.775 [2024-12-09 23:25:36.255675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:43:55.775 [2024-12-09 23:25:36.255681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.260906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.260929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:55.775 [2024-12-09 23:25:36.260937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:43:55.775 [2024-12-09 23:25:36.260944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.278195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.278290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:55.775 [2024-12-09 23:25:36.278302] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 17.197 ms 00:43:55.775 [2024-12-09 23:25:36.278308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.290025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.290055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:55.775 [2024-12-09 23:25:36.290064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.671 ms 00:43:55.775 [2024-12-09 23:25:36.290073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.290173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.290183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:55.775 [2024-12-09 23:25:36.290196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:43:55.775 [2024-12-09 23:25:36.290202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.308134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.308158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:55.775 [2024-12-09 23:25:36.308166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.920 ms 00:43:55.775 [2024-12-09 23:25:36.308171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.325517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.325614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:55.775 [2024-12-09 23:25:36.325626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:43:55.775 [2024-12-09 23:25:36.325631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.342532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.342556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:55.775 [2024-12-09 23:25:36.342563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.871 ms 00:43:55.775 [2024-12-09 23:25:36.342569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.359537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.775 [2024-12-09 23:25:36.359562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:55.775 [2024-12-09 23:25:36.359569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.924 ms 00:43:55.775 [2024-12-09 23:25:36.359575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.775 [2024-12-09 23:25:36.359601] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:55.775 [2024-12-09 23:25:36.359612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:43:55.775 [2024-12-09 23:25:36.359639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:55.775 [2024-12-09 23:25:36.359808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.359997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360090] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:55.776 [2024-12-09 23:25:36.360230] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:55.776 [2024-12-09 23:25:36.360236] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:55.776 [2024-12-09 23:25:36.360243] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:55.776 [2024-12-09 23:25:36.360249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:43:55.776 [2024-12-09 23:25:36.360255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:55.776 [2024-12-09 23:25:36.360262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:55.776 [2024-12-09 23:25:36.360267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:55.776 [2024-12-09 23:25:36.360273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:55.776 [2024-12-09 23:25:36.360281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:55.776 [2024-12-09 23:25:36.360286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:55.776 [2024-12-09 23:25:36.360290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:55.777 [2024-12-09 23:25:36.360297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.777 [2024-12-09 23:25:36.360303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:55.777 [2024-12-09 23:25:36.360309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:43:55.777 [2024-12-09 23:25:36.360315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.369638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.777 [2024-12-09 23:25:36.369671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:55.777 [2024-12-09 23:25:36.369679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.310 ms 00:43:55.777 [2024-12-09 23:25:36.369685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.369976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:55.777 [2024-12-09 23:25:36.370000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:55.777 [2024-12-09 23:25:36.370007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:43:55.777 [2024-12-09 23:25:36.370013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.399039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:55.777 [2024-12-09 23:25:36.399067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:55.777 [2024-12-09 23:25:36.399075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:55.777 [2024-12-09 23:25:36.399084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.399145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:55.777 [2024-12-09 23:25:36.399153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:55.777 [2024-12-09 23:25:36.399160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:55.777 [2024-12-09 23:25:36.399165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.399197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:55.777 [2024-12-09 23:25:36.399204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:55.777 [2024-12-09 23:25:36.399211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:55.777 [2024-12-09 23:25:36.399216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:55.777 [2024-12-09 23:25:36.399232] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:55.777 [2024-12-09 23:25:36.399238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:55.777 [2024-12-09 23:25:36.399244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:55.777 [2024-12-09 23:25:36.399251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.461817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.461854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:56.040 [2024-12-09 23:25:36.461864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.461875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:56.040 [2024-12-09 23:25:36.513115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:56.040 [2024-12-09 23:25:36.513178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:56.040 [2024-12-09 23:25:36.513227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:56.040 [2024-12-09 23:25:36.513327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:56.040 [2024-12-09 23:25:36.513376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:56.040 [2024-12-09 23:25:36.513431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:43:56.040 [2024-12-09 23:25:36.513476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:56.040 [2024-12-09 23:25:36.513487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:56.040 [2024-12-09 23:25:36.513493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:56.040 [2024-12-09 23:25:36.513499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:56.040 [2024-12-09 23:25:36.513623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.082 ms, result 0 00:43:56.610 00:43:56.610 00:43:56.610 23:25:37 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77144 00:43:56.610 23:25:37 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77144 00:43:56.610 23:25:37 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77144 ']' 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.610 23:25:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:43:56.610 [2024-12-09 23:25:37.181343] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:56.610 [2024-12-09 23:25:37.181624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77144 ] 00:43:56.869 [2024-12-09 23:25:37.332214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.869 [2024-12-09 23:25:37.419935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.436 23:25:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.436 23:25:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:43:57.436 23:25:38 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:43:57.695 [2024-12-09 23:25:38.225243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:57.695 [2024-12-09 23:25:38.225307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:57.955 [2024-12-09 23:25:38.393703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.393745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:57.955 [2024-12-09 23:25:38.393760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:57.955 [2024-12-09 23:25:38.393767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.395966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.396136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:57.955 [2024-12-09 23:25:38.396154] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:43:57.955 [2024-12-09 23:25:38.396161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.396452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:57.955 [2024-12-09 23:25:38.397069] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:57.955 [2024-12-09 23:25:38.397101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.397108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:57.955 [2024-12-09 23:25:38.397117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:43:57.955 [2024-12-09 23:25:38.397123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.398456] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:57.955 [2024-12-09 23:25:38.408996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.409024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:57.955 [2024-12-09 23:25:38.409033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.543 ms 00:43:57.955 [2024-12-09 23:25:38.409042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.409115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.409125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:57.955 [2024-12-09 23:25:38.409132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:43:57.955 [2024-12-09 23:25:38.409139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.415426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.415567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:57.955 [2024-12-09 23:25:38.415579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:43:57.955 [2024-12-09 23:25:38.415587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.415666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.415675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:57.955 [2024-12-09 23:25:38.415681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:43:57.955 [2024-12-09 23:25:38.415692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.415709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.415717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:57.955 [2024-12-09 23:25:38.415724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:57.955 [2024-12-09 23:25:38.415731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.415748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:43:57.955 [2024-12-09 23:25:38.418752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.418849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:57.955 [2024-12-09 23:25:38.418864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:43:57.955 [2024-12-09 23:25:38.418870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.418905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.955 [2024-12-09 23:25:38.418911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:57.955 [2024-12-09 23:25:38.418920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:57.955 [2024-12-09 23:25:38.418927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.955 [2024-12-09 23:25:38.418945] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:57.955 [2024-12-09 23:25:38.418961] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:57.955 [2024-12-09 23:25:38.419010] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:57.955 [2024-12-09 23:25:38.419023] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:57.956 [2024-12-09 23:25:38.419107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:57.956 [2024-12-09 23:25:38.419116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:57.956 [2024-12-09 23:25:38.419127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:57.956 [2024-12-09 23:25:38.419136] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419144] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419151] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:43:57.956 [2024-12-09 23:25:38.419158] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:57.956 [2024-12-09 23:25:38.419163] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:57.956 [2024-12-09 23:25:38.419172] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:57.956 [2024-12-09 23:25:38.419180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.956 [2024-12-09 23:25:38.419187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:57.956 [2024-12-09 23:25:38.419193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:43:57.956 [2024-12-09 23:25:38.419199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.956 [2024-12-09 23:25:38.419278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.956 [2024-12-09 23:25:38.419287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:57.956 [2024-12-09 23:25:38.419293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:43:57.956 [2024-12-09 23:25:38.419300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.956 
[2024-12-09 23:25:38.419378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:57.956 [2024-12-09 23:25:38.419388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:57.956 [2024-12-09 23:25:38.419395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:57.956 [2024-12-09 23:25:38.419422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:57.956 [2024-12-09 23:25:38.419443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:57.956 [2024-12-09 23:25:38.419455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:57.956 [2024-12-09 23:25:38.419464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:43:57.956 [2024-12-09 23:25:38.419469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:57.956 [2024-12-09 23:25:38.419476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:57.956 [2024-12-09 23:25:38.419482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:43:57.956 [2024-12-09 23:25:38.419489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:57.956 [2024-12-09 23:25:38.419501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:57.956 [2024-12-09 23:25:38.419522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:57.956 [2024-12-09 23:25:38.419541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:57.956 [2024-12-09 23:25:38.419558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:57.956 [2024-12-09 23:25:38.419577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:43:57.956 [2024-12-09 23:25:38.419593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:57.956 [2024-12-09 23:25:38.419606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:57.956 [2024-12-09 23:25:38.419612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:43:57.956 [2024-12-09 23:25:38.419617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:57.956 [2024-12-09 23:25:38.419624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:57.956 [2024-12-09 23:25:38.419628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:43:57.956 [2024-12-09 23:25:38.419636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:57.956 [2024-12-09 23:25:38.419648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:43:57.956 [2024-12-09 23:25:38.419653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:57.956 [2024-12-09 23:25:38.419668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:57.956 [2024-12-09 23:25:38.419676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:57.956 [2024-12-09 23:25:38.419691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:57.956 [2024-12-09 23:25:38.419697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:57.956 [2024-12-09 23:25:38.419704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:57.956 [2024-12-09 23:25:38.419709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:57.956 [2024-12-09 23:25:38.419715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:57.956 [2024-12-09 23:25:38.419721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:57.956 [2024-12-09 23:25:38.419729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:57.956 [2024-12-09 23:25:38.419736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:43:57.956 [2024-12-09 23:25:38.419753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:43:57.956 [2024-12-09 23:25:38.419760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:43:57.956 [2024-12-09 23:25:38.419766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:43:57.956 [2024-12-09 23:25:38.419773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:43:57.956 [2024-12-09 23:25:38.419778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:43:57.956 [2024-12-09 23:25:38.419785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:43:57.956 [2024-12-09 23:25:38.419790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:43:57.956 [2024-12-09 23:25:38.419797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:43:57.956 [2024-12-09 23:25:38.419803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:43:57.956 [2024-12-09 23:25:38.419833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:57.956 [2024-12-09 23:25:38.419840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:57.956 [2024-12-09 23:25:38.419855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:57.956 [2024-12-09 23:25:38.419862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:57.956 [2024-12-09 23:25:38.419868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:57.956 [2024-12-09 23:25:38.419876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.956 [2024-12-09 23:25:38.419883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:57.956 [2024-12-09 23:25:38.419890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:43:57.956 [2024-12-09 23:25:38.419897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.956 [2024-12-09 23:25:38.444211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.956 [2024-12-09 23:25:38.444240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:57.956 [2024-12-09 23:25:38.444250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.258 ms 00:43:57.957 [2024-12-09 23:25:38.444259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 
[2024-12-09 23:25:38.444351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.444359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:57.957 [2024-12-09 23:25:38.444367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:43:57.957 [2024-12-09 23:25:38.444373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.470669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.470699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:57.957 [2024-12-09 23:25:38.470709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.276 ms 00:43:57.957 [2024-12-09 23:25:38.470715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.470763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.470770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:57.957 [2024-12-09 23:25:38.470778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:57.957 [2024-12-09 23:25:38.470784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.471185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.471198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:57.957 [2024-12-09 23:25:38.471209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:43:57.957 [2024-12-09 23:25:38.471215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.471327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.471335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:57.957 [2024-12-09 23:25:38.471343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:43:57.957 [2024-12-09 23:25:38.471349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.484932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.485094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:57.957 [2024-12-09 23:25:38.485110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:43:57.957 [2024-12-09 23:25:38.485117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.507537] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:43:57.957 [2024-12-09 23:25:38.507567] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:57.957 [2024-12-09 23:25:38.507581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.507588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:57.957 [2024-12-09 23:25:38.507597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.361 ms 00:43:57.957 [2024-12-09 23:25:38.507608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.526370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.526412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:57.957 [2024-12-09 23:25:38.526422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.700 ms 00:43:57.957 [2024-12-09 23:25:38.526429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.536114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.536139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:57.957 [2024-12-09 23:25:38.536150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.627 ms 00:43:57.957 [2024-12-09 23:25:38.536156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.545137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.545160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:57.957 [2024-12-09 23:25:38.545169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.938 ms 00:43:57.957 [2024-12-09 23:25:38.545175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.957 [2024-12-09 23:25:38.545641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.957 [2024-12-09 23:25:38.545664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:57.957 [2024-12-09 23:25:38.545673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:43:57.957 [2024-12-09 23:25:38.545679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.593966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.594007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:58.216 [2024-12-09 23:25:38.594019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.266 ms 00:43:58.216 [2024-12-09 23:25:38.594027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.602249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:43:58.216 [2024-12-09 23:25:38.616662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.616697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:58.216 [2024-12-09 23:25:38.616709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.562 ms 00:43:58.216 [2024-12-09 23:25:38.616717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.616781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.616791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:58.216 [2024-12-09 23:25:38.616798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:58.216 [2024-12-09 23:25:38.616806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.616853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.616862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:58.216 [2024-12-09 23:25:38.616870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.032 ms 00:43:58.216 [2024-12-09 23:25:38.616880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.616899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.616907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:58.216 [2024-12-09 23:25:38.616913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:58.216 [2024-12-09 23:25:38.616922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.616950] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:58.216 [2024-12-09 23:25:38.616961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.616969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:58.216 [2024-12-09 23:25:38.616976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:58.216 [2024-12-09 23:25:38.617001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.636000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.636026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:58.216 [2024-12-09 23:25:38.636037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.977 ms 00:43:58.216 [2024-12-09 23:25:38.636044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.636118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.216 [2024-12-09 23:25:38.636126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:58.216 [2024-12-09 23:25:38.636135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:43:58.216 [2024-12-09 23:25:38.636144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.216 [2024-12-09 23:25:38.636901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:58.216 [2024-12-09 23:25:38.639271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 242.942 ms, result 0 00:43:58.216 [2024-12-09 23:25:38.641857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:58.216 Some configs were skipped because the RPC state that can call them passed over. 
00:43:58.216 23:25:38 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:43:58.475 [2024-12-09 23:25:38.863231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.475 [2024-12-09 23:25:38.863272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:43:58.475 [2024-12-09 23:25:38.863282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:43:58.475 [2024-12-09 23:25:38.863290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.475 [2024-12-09 23:25:38.863316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.647 ms, result 0 00:43:58.475 true 00:43:58.475 23:25:38 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:43:58.475 [2024-12-09 23:25:39.059556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.475 [2024-12-09 23:25:39.059586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:43:58.475 [2024-12-09 23:25:39.059596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:43:58.475 [2024-12-09 23:25:39.059603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.475 [2024-12-09 23:25:39.059630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.757 ms, result 0 00:43:58.475 true 00:43:58.475 23:25:39 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77144 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77144 ']' 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77144 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77144 00:43:58.475 killing process with pid 77144 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77144' 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77144 00:43:58.475 23:25:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77144 00:43:59.042 [2024-12-09 23:25:39.666347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.042 [2024-12-09 23:25:39.666406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:59.042 [2024-12-09 23:25:39.666418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:59.042 [2024-12-09 23:25:39.666427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.042 [2024-12-09 23:25:39.666449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:43:59.042 [2024-12-09 23:25:39.668639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.042 [2024-12-09 23:25:39.668667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:59.042 [2024-12-09 23:25:39.668680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.174 ms 00:43:59.042 [2024-12-09 23:25:39.668686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.042 [2024-12-09 23:25:39.668950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.042 [2024-12-09 23:25:39.668965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:59.042 [2024-12-09 23:25:39.668974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:43:59.042 [2024-12-09 23:25:39.668980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.042 [2024-12-09 23:25:39.672095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.042 [2024-12-09 23:25:39.672120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:59.042 [2024-12-09 23:25:39.672131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:43:59.042 [2024-12-09 23:25:39.672137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.043 [2024-12-09 23:25:39.677344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.677504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:59.303 [2024-12-09 23:25:39.677525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:43:59.303 [2024-12-09 23:25:39.677532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.684652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.684763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:59.303 [2024-12-09 23:25:39.684779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.050 ms 00:43:59.303 [2024-12-09 23:25:39.684786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.691439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.691548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:59.303 [2024-12-09 23:25:39.691562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.621 ms 00:43:59.303 [2024-12-09 23:25:39.691569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.691681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.691690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:59.303 [2024-12-09 23:25:39.691698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:43:59.303 [2024-12-09 23:25:39.691704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.699244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.699268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:59.303 [2024-12-09 23:25:39.699277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:43:59.303 [2024-12-09 23:25:39.699283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.706458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.706553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:59.303 [2024-12-09 
23:25:39.706572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.134 ms 00:43:59.303 [2024-12-09 23:25:39.706577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.713943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.714040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:59.303 [2024-12-09 23:25:39.714092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.329 ms 00:43:59.303 [2024-12-09 23:25:39.714110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.721171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.303 [2024-12-09 23:25:39.721257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:59.303 [2024-12-09 23:25:39.721297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:43:59.303 [2024-12-09 23:25:39.721315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.303 [2024-12-09 23:25:39.721555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:59.303 [2024-12-09 23:25:39.721613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.721846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.721913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.721961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.721997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722335] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.722964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 
23:25:39.723051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:59.303 [2024-12-09 23:25:39.723474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:43:59.304 [2024-12-09 23:25:39.723796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.723971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:59.304 [2024-12-09 23:25:39.724934] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:59.304 [2024-12-09 23:25:39.724958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:43:59.304 [2024-12-09 23:25:39.724994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:59.304 [2024-12-09 23:25:39.725011] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:59.304 [2024-12-09 23:25:39.725027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:59.304 [2024-12-09 23:25:39.725071] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:59.304 [2024-12-09 23:25:39.725114] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:59.304 [2024-12-09 23:25:39.725134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:59.304 [2024-12-09 23:25:39.725164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:59.304 [2024-12-09 23:25:39.725182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:59.304 [2024-12-09 23:25:39.725196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:59.304 [2024-12-09 23:25:39.725214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.304 [2024-12-09 23:25:39.725229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:59.304 [2024-12-09 23:25:39.725248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:43:59.304 [2024-12-09 23:25:39.725262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.735351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:59.304 [2024-12-09 23:25:39.735436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:59.304 [2024-12-09 23:25:39.735483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.042 ms 00:43:59.304 [2024-12-09 23:25:39.735501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.735854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:43:59.304 [2024-12-09 23:25:39.735925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:59.304 [2024-12-09 23:25:39.736050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:43:59.304 [2024-12-09 23:25:39.736156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.772757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.772807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:59.304 [2024-12-09 23:25:39.772831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.772848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.772949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.772970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:59.304 [2024-12-09 23:25:39.773003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.773023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.773209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.773241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:59.304 [2024-12-09 23:25:39.773297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.773316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.773344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.773383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:59.304 [2024-12-09 23:25:39.773402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.773419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.836119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.836218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:59.304 [2024-12-09 23:25:39.836232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.836239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.886925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.887063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:59.304 [2024-12-09 23:25:39.887079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.887089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.887173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.887182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:59.304 [2024-12-09 23:25:39.887192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.887199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:43:59.304 [2024-12-09 23:25:39.887226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.887234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:59.304 [2024-12-09 23:25:39.887242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.887248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.887329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.887337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:59.304 [2024-12-09 23:25:39.887345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.304 [2024-12-09 23:25:39.887352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.304 [2024-12-09 23:25:39.887381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.304 [2024-12-09 23:25:39.887389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:59.305 [2024-12-09 23:25:39.887397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.305 [2024-12-09 23:25:39.887404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.305 [2024-12-09 23:25:39.887445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.305 [2024-12-09 23:25:39.887453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:59.305 [2024-12-09 23:25:39.887463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.305 [2024-12-09 23:25:39.887470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.305 [2024-12-09 23:25:39.887514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:59.305 [2024-12-09 23:25:39.887523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:59.305 [2024-12-09 23:25:39.887530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:59.305 [2024-12-09 23:25:39.887537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:59.305 [2024-12-09 23:25:39.887666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 221.296 ms, result 0 00:43:59.873 23:25:40 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:00.132 [2024-12-09 23:25:40.508462] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
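The read-back at trim.sh@105 uses spdk_dd to copy 65536 blocks out of the ftl0 bdev into a plain file, recreating the bdev stack from ftl.json first. The same invocation pattern restated as a sketch (the flags are the ones shown above; the output path is illustrative, and the flag glosses are my assumption rather than authoritative spdk_dd documentation):
    # --ib:    source bdev opened inside the SPDK app (here the FTL device)
    # --of:    destination regular file
    # --count: number of blocks to copy
    # --json:  bdev config used to bring nvc0n1/ftl0 back up
    ./build/bin/spdk_dd --ib=ftl0 --of=/tmp/ftl_data \
        --count=65536 --json=test/ftl/config/ftl.json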
00:44:00.132 [2024-12-09 23:25:40.508706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77191 ] 00:44:00.132 [2024-12-09 23:25:40.664054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.132 [2024-12-09 23:25:40.750874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.397 [2024-12-09 23:25:40.983415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:00.397 [2024-12-09 23:25:40.983477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:00.677 [2024-12-09 23:25:41.135572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.135615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:00.677 [2024-12-09 23:25:41.135626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:00.677 [2024-12-09 23:25:41.135633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.137831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.137863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:00.677 [2024-12-09 23:25:41.137871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:44:00.677 [2024-12-09 23:25:41.137877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.137935] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:00.677 [2024-12-09 23:25:41.138536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:00.677 [2024-12-09 23:25:41.138561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.138567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:00.677 [2024-12-09 23:25:41.138574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:44:00.677 [2024-12-09 23:25:41.138580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.139851] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:00.677 [2024-12-09 23:25:41.149892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.150066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:00.677 [2024-12-09 23:25:41.150083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.042 ms 00:44:00.677 [2024-12-09 23:25:41.150090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.150162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.150171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:00.677 [2024-12-09 23:25:41.150178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:44:00.677 [2024-12-09 23:25:41.150184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.156282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:00.677 [2024-12-09 23:25:41.156308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:00.677 [2024-12-09 23:25:41.156316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:44:00.677 [2024-12-09 23:25:41.156322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.156394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.156403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:00.677 [2024-12-09 23:25:41.156409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:44:00.677 [2024-12-09 23:25:41.156416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.156434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.156442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:00.677 [2024-12-09 23:25:41.156449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:00.677 [2024-12-09 23:25:41.156455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.156473] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:00.677 [2024-12-09 23:25:41.159391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.159413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:00.677 [2024-12-09 23:25:41.159421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:44:00.677 [2024-12-09 23:25:41.159427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.159459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.677 [2024-12-09 23:25:41.159466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:00.677 [2024-12-09 23:25:41.159473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:00.677 [2024-12-09 23:25:41.159479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.677 [2024-12-09 23:25:41.159496] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:00.677 [2024-12-09 23:25:41.159513] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:00.677 [2024-12-09 23:25:41.159542] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:00.677 [2024-12-09 23:25:41.159555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:00.677 [2024-12-09 23:25:41.159639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:00.677 [2024-12-09 23:25:41.159648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:00.677 [2024-12-09 23:25:41.159657] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:00.677 [2024-12-09 23:25:41.159668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:00.677 [2024-12-09 23:25:41.159674] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:00.677 [2024-12-09 23:25:41.159681] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:00.677 [2024-12-09 23:25:41.159688] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:00.677 [2024-12-09 23:25:41.159694] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:00.677 [2024-12-09 23:25:41.159700] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:00.677 [2024-12-09 23:25:41.159706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.159712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:00.678 [2024-12-09 23:25:41.159719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:44:00.678 [2024-12-09 23:25:41.159725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.159792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.159801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:00.678 [2024-12-09 23:25:41.159807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:44:00.678 [2024-12-09 23:25:41.159813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.159889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:00.678 [2024-12-09 23:25:41.159898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:00.678 [2024-12-09 23:25:41.159905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:00.678 [2024-12-09 23:25:41.159911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.159917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:00.678 [2024-12-09 23:25:41.159923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.159928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:00.678 [2024-12-09 23:25:41.159935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:00.678 [2024-12-09 23:25:41.159940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:00.678 [2024-12-09 23:25:41.159945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:00.678 [2024-12-09 23:25:41.159950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:00.678 [2024-12-09 23:25:41.159962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:00.678 [2024-12-09 23:25:41.159967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:00.678 [2024-12-09 23:25:41.159973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:00.678 [2024-12-09 23:25:41.159978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:00.678 [2024-12-09 23:25:41.159996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:00.678 [2024-12-09 23:25:41.160010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:00.678 [2024-12-09 23:25:41.160026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:00.678 [2024-12-09 23:25:41.160046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:00.678 [2024-12-09 23:25:41.160061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:00.678 [2024-12-09 23:25:41.160077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:00.678 [2024-12-09 23:25:41.160100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:00.678 [2024-12-09 23:25:41.160110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:00.678 [2024-12-09 23:25:41.160115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:00.678 [2024-12-09 23:25:41.160121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:00.678 [2024-12-09 23:25:41.160126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:00.678 [2024-12-09 23:25:41.160131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:00.678 [2024-12-09 23:25:41.160136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:00.678 [2024-12-09 23:25:41.160146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:00.678 [2024-12-09 23:25:41.160151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160157] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:00.678 [2024-12-09 23:25:41.160163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:00.678 [2024-12-09 23:25:41.160172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:00.678 [2024-12-09 23:25:41.160184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:00.678 [2024-12-09 23:25:41.160190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:00.678 [2024-12-09 23:25:41.160195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:00.678 
[2024-12-09 23:25:41.160200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:00.678 [2024-12-09 23:25:41.160206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:00.678 [2024-12-09 23:25:41.160211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:00.678 [2024-12-09 23:25:41.160219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:00.678 [2024-12-09 23:25:41.160227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:00.678 [2024-12-09 23:25:41.160239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:00.678 [2024-12-09 23:25:41.160245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:00.678 [2024-12-09 23:25:41.160250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:00.678 [2024-12-09 23:25:41.160256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:00.678 [2024-12-09 23:25:41.160262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:00.678 [2024-12-09 23:25:41.160267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:00.678 [2024-12-09 23:25:41.160273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:00.678 [2024-12-09 23:25:41.160278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:00.678 [2024-12-09 23:25:41.160283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:00.678 [2024-12-09 23:25:41.160311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:00.678 [2024-12-09 23:25:41.160317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:00.678 [2024-12-09 23:25:41.160329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:00.678 [2024-12-09 23:25:41.160335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:00.678 [2024-12-09 23:25:41.160341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:00.678 [2024-12-09 23:25:41.160346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.160357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:00.678 [2024-12-09 23:25:41.160362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:44:00.678 [2024-12-09 23:25:41.160368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.184426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.184456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:00.678 [2024-12-09 23:25:41.184466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.997 ms 00:44:00.678 [2024-12-09 23:25:41.184472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.184569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.184578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:00.678 [2024-12-09 23:25:41.184585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:44:00.678 [2024-12-09 23:25:41.184591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.222978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.223015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:00.678 [2024-12-09 23:25:41.223026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.370 ms 00:44:00.678 [2024-12-09 23:25:41.223033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.223092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.678 [2024-12-09 23:25:41.223102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:00.678 [2024-12-09 23:25:41.223109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:44:00.678 [2024-12-09 23:25:41.223115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.678 [2024-12-09 23:25:41.223489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.223512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:00.679 [2024-12-09 23:25:41.223520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:44:00.679 [2024-12-09 23:25:41.223530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.223643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.223658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:00.679 [2024-12-09 23:25:41.223665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:44:00.679 [2024-12-09 23:25:41.223672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.235865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.235891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:00.679 [2024-12-09 23:25:41.235899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.176 ms 00:44:00.679 [2024-12-09 23:25:41.235906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.245925] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:00.679 [2024-12-09 23:25:41.245954] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:00.679 [2024-12-09 23:25:41.245965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.245972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:00.679 [2024-12-09 23:25:41.245978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.947 ms 00:44:00.679 [2024-12-09 23:25:41.246007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.278779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.278822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:00.679 [2024-12-09 23:25:41.278836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.706 ms 00:44:00.679 [2024-12-09 23:25:41.278846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.290418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.290450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:00.679 [2024-12-09 23:25:41.290460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.508 ms 00:44:00.679 [2024-12-09 23:25:41.290467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.301635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.301807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:00.679 [2024-12-09 23:25:41.301823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.101 ms 00:44:00.679 [2024-12-09 23:25:41.301831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.679 [2024-12-09 23:25:41.302463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.679 [2024-12-09 23:25:41.302484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:00.679 [2024-12-09 23:25:41.302494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:44:00.679 [2024-12-09 23:25:41.302502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.361318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.361356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:00.940 [2024-12-09 23:25:41.361367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.793 ms 00:44:00.940 [2024-12-09 23:25:41.361375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.371966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:00.940 [2024-12-09 23:25:41.388154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.388188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:00.940 [2024-12-09 23:25:41.388200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.693 ms 00:44:00.940 [2024-12-09 23:25:41.388212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.388286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.388298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:00.940 [2024-12-09 23:25:41.388306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:00.940 [2024-12-09 23:25:41.388314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.388364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.388374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:00.940 [2024-12-09 23:25:41.388383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:44:00.940 [2024-12-09 23:25:41.388394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.388428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.388437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:00.940 [2024-12-09 23:25:41.388446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:00.940 [2024-12-09 23:25:41.388454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.388487] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:00.940 [2024-12-09 23:25:41.388497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.388505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:00.940 [2024-12-09 23:25:41.388514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:00.940 [2024-12-09 23:25:41.388521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.412153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.412184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:00.940 [2024-12-09 23:25:41.412196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.609 ms 00:44:00.940 [2024-12-09 23:25:41.412204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:00.940 [2024-12-09 23:25:41.412295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:00.940 [2024-12-09 23:25:41.412306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:00.940 [2024-12-09 23:25:41.412315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:44:00.940 [2024-12-09 23:25:41.412322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:44:00.940 [2024-12-09 23:25:41.413230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:00.940 [2024-12-09 23:25:41.416233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.323 ms, result 0 00:44:00.940 [2024-12-09 23:25:41.416830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:00.940 [2024-12-09 23:25:41.429756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:01.885  [2024-12-09T23:25:43.908Z] Copying: 34/256 [MB] (34 MBps) [2024-12-09T23:25:44.848Z] Copying: 68/256 [MB] (33 MBps) [2024-12-09T23:25:45.791Z] Copying: 107/256 [MB] (38 MBps) [2024-12-09T23:25:46.733Z] Copying: 148/256 [MB] (41 MBps) [2024-12-09T23:25:47.675Z] Copying: 186/256 [MB] (37 MBps) [2024-12-09T23:25:48.615Z] Copying: 224/256 [MB] (38 MBps) [2024-12-09T23:25:48.878Z] Copying: 256/256 [MB] (average 37 MBps)[2024-12-09 23:25:48.683124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:08.242 [2024-12-09 23:25:48.696813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.696851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:08.242 [2024-12-09 23:25:48.696872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:08.242 [2024-12-09 23:25:48.696881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.696908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:08.242 [2024-12-09 23:25:48.699791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.699821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:08.242 [2024-12-09 23:25:48.699832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:44:08.242 [2024-12-09 23:25:48.699842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.700133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.700145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:08.242 [2024-12-09 23:25:48.700154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:44:08.242 [2024-12-09 23:25:48.700162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.703858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.703881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:08.242 [2024-12-09 23:25:48.703891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:44:08.242 [2024-12-09 23:25:48.703900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.710759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.710786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:08.242 [2024-12-09 23:25:48.710797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.842 ms 00:44:08.242 [2024-12-09 23:25:48.710805] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.740487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.740522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:08.242 [2024-12-09 23:25:48.740534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.616 ms 00:44:08.242 [2024-12-09 23:25:48.740542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.754537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.754571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:08.242 [2024-12-09 23:25:48.754586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.970 ms 00:44:08.242 [2024-12-09 23:25:48.754595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.755677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.242 [2024-12-09 23:25:48.755700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:08.242 [2024-12-09 23:25:48.755718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:44:08.242 [2024-12-09 23:25:48.755727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.242 [2024-12-09 23:25:48.779976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.243 [2024-12-09 23:25:48.780016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:08.243 [2024-12-09 23:25:48.780026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.231 ms 00:44:08.243 [2024-12-09 23:25:48.780034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.243 [2024-12-09 23:25:48.803761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.243 [2024-12-09 23:25:48.803951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:08.243 [2024-12-09 23:25:48.803970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.706 ms 00:44:08.243 [2024-12-09 23:25:48.803977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.243 [2024-12-09 23:25:48.826407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.243 [2024-12-09 23:25:48.826542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:08.243 [2024-12-09 23:25:48.826557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.389 ms 00:44:08.243 [2024-12-09 23:25:48.826564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.243 [2024-12-09 23:25:48.849541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.243 [2024-12-09 23:25:48.849581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:08.243 [2024-12-09 23:25:48.849593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:44:08.243 [2024-12-09 23:25:48.849601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.243 [2024-12-09 23:25:48.849626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:08.243 [2024-12-09 23:25:48.849641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849651] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849850] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.849979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 
[2024-12-09 23:25:48.850070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:44:08.243 [2024-12-09 23:25:48.850260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:08.243 [2024-12-09 23:25:48.850267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:08.244 [2024-12-09 23:25:48.850471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:44:08.244 [2024-12-09 23:25:48.850485] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9073a2f6-168e-47e0-b22f-dc807919fd17 00:44:08.244 [2024-12-09 23:25:48.850494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:08.244 [2024-12-09 23:25:48.850501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:08.244 [2024-12-09 23:25:48.850511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:08.244 [2024-12-09 23:25:48.850518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:08.244 [2024-12-09 23:25:48.850525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:08.244 [2024-12-09 23:25:48.850533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:08.244 [2024-12-09 23:25:48.850544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:08.244 [2024-12-09 23:25:48.850551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:08.244 [2024-12-09 23:25:48.850557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:08.244 [2024-12-09 23:25:48.850565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.244 [2024-12-09 23:25:48.850572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:08.244 [2024-12-09 23:25:48.850580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:44:08.244 [2024-12-09 23:25:48.850588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.244 [2024-12-09 23:25:48.863291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.244 [2024-12-09 23:25:48.863320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:08.244 [2024-12-09 23:25:48.863331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:44:08.244 [2024-12-09 23:25:48.863339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.244 [2024-12-09 23:25:48.863713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.244 [2024-12-09 23:25:48.863733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:08.244 [2024-12-09 23:25:48.863742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:44:08.244 [2024-12-09 23:25:48.863750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.505 [2024-12-09 23:25:48.901285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.505 [2024-12-09 23:25:48.901437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:08.505 [2024-12-09 23:25:48.901454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.505 [2024-12-09 23:25:48.901468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.505 [2024-12-09 23:25:48.901560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.505 [2024-12-09 23:25:48.901570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:08.505 [2024-12-09 23:25:48.901578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.505 [2024-12-09 23:25:48.901586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:48.901630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 
23:25:48.901640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:08.506 [2024-12-09 23:25:48.901648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:48.901655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:48.901684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:48.901692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:08.506 [2024-12-09 23:25:48.901701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:48.901709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:48.984029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:48.984199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:08.506 [2024-12-09 23:25:48.984216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:48.984225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.050718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.050883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:08.506 [2024-12-09 23:25:49.050899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.050909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:08.506 [2024-12-09 23:25:49.051028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:08.506 [2024-12-09 23:25:49.051089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:08.506 [2024-12-09 23:25:49.051212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:08.506 [2024-12-09 23:25:49.051278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051331] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:08.506 [2024-12-09 23:25:49.051350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.506 [2024-12-09 23:25:49.051418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:08.506 [2024-12-09 23:25:49.051427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.506 [2024-12-09 23:25:49.051434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.506 [2024-12-09 23:25:49.051584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.766 ms, result 0 00:44:09.449 00:44:09.449 00:44:09.449 23:25:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:44:09.710 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:44:09.710 23:25:50 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:44:09.710 23:25:50 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:44:09.710 23:25:50 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:44:09.710 23:25:50 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:09.710 23:25:50 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:44:09.970 23:25:50 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:44:09.970 Process with pid 77144 is not found 00:44:09.970 23:25:50 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77144 00:44:09.970 23:25:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77144 ']' 00:44:09.970 23:25:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77144 00:44:09.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77144) - No such process 00:44:09.970 23:25:50 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77144 is not found' 00:44:09.970 ************************************ 00:44:09.970 END TEST ftl_trim 00:44:09.970 ************************************ 00:44:09.970 00:44:09.970 real 1m10.489s 00:44:09.970 user 1m29.139s 00:44:09.970 sys 0m10.275s 00:44:09.970 23:25:50 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:09.970 23:25:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:44:09.970 23:25:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:44:09.970 23:25:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:44:09.970 23:25:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:09.970 23:25:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:44:09.970 ************************************ 00:44:09.970 START TEST ftl_restore 00:44:09.970 ************************************ 00:44:09.970 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:44:09.970 * Looking for test storage... 
00:44:09.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:09.970 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:09.970 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:44:09.970 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:09.970 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:09.970 23:25:50 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:09.970 23:25:50 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:09.970 23:25:50 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:09.970 23:25:50 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:09.971 23:25:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:44:09.971 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:09.971 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.971 --rc genhtml_branch_coverage=1 00:44:09.971 --rc genhtml_function_coverage=1 00:44:09.971 --rc genhtml_legend=1 00:44:09.971 --rc geninfo_all_blocks=1 00:44:09.971 --rc geninfo_unexecuted_blocks=1 00:44:09.971 00:44:09.971 ' 00:44:09.971 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.971 --rc genhtml_branch_coverage=1 00:44:09.971 --rc genhtml_function_coverage=1 
00:44:09.971 --rc genhtml_legend=1 00:44:09.971 --rc geninfo_all_blocks=1 00:44:09.971 --rc geninfo_unexecuted_blocks=1 00:44:09.971 00:44:09.971 ' 00:44:09.971 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.971 --rc genhtml_branch_coverage=1 00:44:09.971 --rc genhtml_function_coverage=1 00:44:09.971 --rc genhtml_legend=1 00:44:09.971 --rc geninfo_all_blocks=1 00:44:09.971 --rc geninfo_unexecuted_blocks=1 00:44:09.971 00:44:09.971 ' 00:44:09.971 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.971 --rc genhtml_branch_coverage=1 00:44:09.971 --rc genhtml_function_coverage=1 00:44:09.971 --rc genhtml_legend=1 00:44:09.971 --rc geninfo_all_blocks=1 00:44:09.971 --rc geninfo_unexecuted_blocks=1 00:44:09.971 00:44:09.971 ' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:09.971 23:25:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zaVv59B8WH 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77362 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77362 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77362 ']' 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:10.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:10.232 23:25:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:10.232 23:25:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:44:10.232 [2024-12-09 23:25:50.690353] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:44:10.232 [2024-12-09 23:25:50.690664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77362 ] 00:44:10.232 [2024-12-09 23:25:50.853042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:10.494 [2024-12-09 23:25:50.963508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:11.066 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:11.067 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:44:11.067 23:25:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:11.328 23:25:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:11.328 23:25:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:44:11.328 23:25:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:11.328 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:44:11.328 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:11.328 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:44:11.328 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:44:11.328 23:25:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:11.590 { 00:44:11.590 "name": "nvme0n1", 00:44:11.590 "aliases": [ 00:44:11.590 "e19bce55-6b87-4d34-a4c1-0b2f46fbd85c" 00:44:11.590 ], 00:44:11.590 "product_name": "NVMe disk", 00:44:11.590 "block_size": 4096, 00:44:11.590 "num_blocks": 1310720, 00:44:11.590 "uuid": "e19bce55-6b87-4d34-a4c1-0b2f46fbd85c", 00:44:11.590 "numa_id": -1, 00:44:11.590 "assigned_rate_limits": { 00:44:11.590 "rw_ios_per_sec": 0, 00:44:11.590 "rw_mbytes_per_sec": 0, 00:44:11.590 "r_mbytes_per_sec": 0, 00:44:11.590 "w_mbytes_per_sec": 0 00:44:11.590 }, 00:44:11.590 "claimed": true, 00:44:11.590 "claim_type": "read_many_write_one", 00:44:11.590 "zoned": false, 00:44:11.590 "supported_io_types": { 00:44:11.590 "read": true, 00:44:11.590 "write": true, 00:44:11.590 "unmap": true, 00:44:11.590 "flush": true, 00:44:11.590 "reset": true, 00:44:11.590 "nvme_admin": true, 00:44:11.590 "nvme_io": true, 00:44:11.590 "nvme_io_md": false, 00:44:11.590 "write_zeroes": true, 00:44:11.590 "zcopy": false, 00:44:11.590 "get_zone_info": false, 00:44:11.590 "zone_management": false, 00:44:11.590 "zone_append": false, 00:44:11.590 "compare": true, 00:44:11.590 "compare_and_write": false, 00:44:11.590 "abort": true, 00:44:11.590 "seek_hole": false, 00:44:11.590 "seek_data": false, 00:44:11.590 "copy": true, 00:44:11.590 "nvme_iov_md": false 00:44:11.590 }, 00:44:11.590 "driver_specific": { 00:44:11.590 "nvme": [ 
00:44:11.590 { 00:44:11.590 "pci_address": "0000:00:11.0", 00:44:11.590 "trid": { 00:44:11.590 "trtype": "PCIe", 00:44:11.590 "traddr": "0000:00:11.0" 00:44:11.590 }, 00:44:11.590 "ctrlr_data": { 00:44:11.590 "cntlid": 0, 00:44:11.590 "vendor_id": "0x1b36", 00:44:11.590 "model_number": "QEMU NVMe Ctrl", 00:44:11.590 "serial_number": "12341", 00:44:11.590 "firmware_revision": "8.0.0", 00:44:11.590 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:11.590 "oacs": { 00:44:11.590 "security": 0, 00:44:11.590 "format": 1, 00:44:11.590 "firmware": 0, 00:44:11.590 "ns_manage": 1 00:44:11.590 }, 00:44:11.590 "multi_ctrlr": false, 00:44:11.590 "ana_reporting": false 00:44:11.590 }, 00:44:11.590 "vs": { 00:44:11.590 "nvme_version": "1.4" 00:44:11.590 }, 00:44:11.590 "ns_data": { 00:44:11.590 "id": 1, 00:44:11.590 "can_share": false 00:44:11.590 } 00:44:11.590 } 00:44:11.590 ], 00:44:11.590 "mp_policy": "active_passive" 00:44:11.590 } 00:44:11.590 } 00:44:11.590 ]' 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:44:11.590 23:25:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:44:11.590 23:25:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:44:11.590 23:25:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:11.590 23:25:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:44:11.590 23:25:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:11.590 23:25:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:11.851 23:25:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d9d5ea24-7ad4-48a7-901d-f110743336ff 00:44:11.851 23:25:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:44:11.851 23:25:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9d5ea24-7ad4-48a7-901d-f110743336ff 00:44:12.112 23:25:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:12.373 23:25:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=301a8f3f-17b8-4408-8a05-e317da6c2e3e 00:44:12.373 23:25:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 301a8f3f-17b8-4408-8a05-e317da6c2e3e 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:44:12.635 23:25:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:12.635 { 00:44:12.635 "name": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:12.635 "aliases": [ 00:44:12.635 "lvs/nvme0n1p0" 00:44:12.635 ], 00:44:12.635 "product_name": "Logical Volume", 00:44:12.635 "block_size": 4096, 00:44:12.635 "num_blocks": 26476544, 00:44:12.635 "uuid": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:12.635 "assigned_rate_limits": { 00:44:12.635 "rw_ios_per_sec": 0, 00:44:12.635 "rw_mbytes_per_sec": 0, 00:44:12.635 "r_mbytes_per_sec": 0, 00:44:12.635 "w_mbytes_per_sec": 0 00:44:12.635 }, 00:44:12.635 "claimed": false, 00:44:12.635 "zoned": false, 00:44:12.635 "supported_io_types": { 00:44:12.635 "read": true, 00:44:12.635 "write": true, 00:44:12.635 "unmap": true, 00:44:12.635 "flush": false, 00:44:12.635 "reset": true, 00:44:12.635 "nvme_admin": false, 00:44:12.635 "nvme_io": false, 00:44:12.635 "nvme_io_md": false, 00:44:12.635 "write_zeroes": true, 00:44:12.635 "zcopy": false, 00:44:12.635 "get_zone_info": false, 00:44:12.635 "zone_management": false, 00:44:12.635 "zone_append": false, 00:44:12.635 "compare": false, 00:44:12.635 "compare_and_write": false, 00:44:12.635 "abort": false, 00:44:12.635 "seek_hole": true, 00:44:12.635 "seek_data": true, 00:44:12.635 "copy": false, 00:44:12.635 "nvme_iov_md": false 00:44:12.635 }, 00:44:12.635 "driver_specific": { 00:44:12.635 "lvol": { 00:44:12.635 "lvol_store_uuid": "301a8f3f-17b8-4408-8a05-e317da6c2e3e", 00:44:12.635 "base_bdev": "nvme0n1", 00:44:12.635 "thin_provision": true, 00:44:12.635 "num_allocated_clusters": 0, 00:44:12.635 "snapshot": false, 00:44:12.635 "clone": false, 00:44:12.635 "esnap_clone": false 00:44:12.635 } 00:44:12.635 } 00:44:12.635 } 00:44:12.635 ]' 00:44:12.635 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:12.897 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:44:12.897 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:12.897 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:12.898 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:12.898 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:44:12.898 23:25:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:44:12.898 23:25:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:44:12.898 23:25:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:13.156 23:25:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:13.156 23:25:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:13.156 23:25:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.156 23:25:53 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:13.156 { 00:44:13.156 "name": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:13.156 "aliases": [ 00:44:13.156 "lvs/nvme0n1p0" 00:44:13.156 ], 00:44:13.156 "product_name": "Logical Volume", 00:44:13.156 "block_size": 4096, 00:44:13.156 "num_blocks": 26476544, 00:44:13.156 "uuid": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:13.156 "assigned_rate_limits": { 00:44:13.156 "rw_ios_per_sec": 0, 00:44:13.156 "rw_mbytes_per_sec": 0, 00:44:13.156 "r_mbytes_per_sec": 0, 00:44:13.156 "w_mbytes_per_sec": 0 00:44:13.156 }, 00:44:13.156 "claimed": false, 00:44:13.156 "zoned": false, 00:44:13.156 "supported_io_types": { 00:44:13.156 "read": true, 00:44:13.156 "write": true, 00:44:13.156 "unmap": true, 00:44:13.156 "flush": false, 00:44:13.156 "reset": true, 00:44:13.156 "nvme_admin": false, 00:44:13.156 "nvme_io": false, 00:44:13.156 "nvme_io_md": false, 00:44:13.156 "write_zeroes": true, 00:44:13.156 "zcopy": false, 00:44:13.156 "get_zone_info": false, 00:44:13.156 "zone_management": false, 00:44:13.156 "zone_append": false, 00:44:13.156 "compare": false, 00:44:13.156 "compare_and_write": false, 00:44:13.156 "abort": false, 00:44:13.156 "seek_hole": true, 00:44:13.156 "seek_data": true, 00:44:13.156 "copy": false, 00:44:13.156 "nvme_iov_md": false 00:44:13.156 }, 00:44:13.156 "driver_specific": { 00:44:13.156 "lvol": { 00:44:13.156 "lvol_store_uuid": "301a8f3f-17b8-4408-8a05-e317da6c2e3e", 00:44:13.156 "base_bdev": "nvme0n1", 00:44:13.156 "thin_provision": true, 00:44:13.156 "num_allocated_clusters": 0, 00:44:13.156 "snapshot": false, 00:44:13.156 "clone": false, 00:44:13.156 "esnap_clone": false 00:44:13.156 } 00:44:13.156 } 00:44:13.156 } 00:44:13.156 ]' 00:44:13.156 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:13.414 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:44:13.414 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:13.414 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:13.414 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:13.414 23:25:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:44:13.414 23:25:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:44:13.414 23:25:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:13.414 23:25:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:44:13.414 23:25:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.414 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.414 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:13.414 23:25:54 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:44:13.414 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:44:13.414 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77843652-a46e-498d-9c05-a4ebb9f89e77 00:44:13.672 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:13.672 { 00:44:13.672 "name": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:13.672 "aliases": [ 00:44:13.672 "lvs/nvme0n1p0" 00:44:13.672 ], 00:44:13.672 "product_name": "Logical Volume", 00:44:13.672 "block_size": 4096, 00:44:13.672 "num_blocks": 26476544, 00:44:13.672 "uuid": "77843652-a46e-498d-9c05-a4ebb9f89e77", 00:44:13.672 "assigned_rate_limits": { 00:44:13.672 "rw_ios_per_sec": 0, 00:44:13.672 "rw_mbytes_per_sec": 0, 00:44:13.672 "r_mbytes_per_sec": 0, 00:44:13.672 "w_mbytes_per_sec": 0 00:44:13.672 }, 00:44:13.672 "claimed": false, 00:44:13.672 "zoned": false, 00:44:13.672 "supported_io_types": { 00:44:13.672 "read": true, 00:44:13.672 "write": true, 00:44:13.672 "unmap": true, 00:44:13.672 "flush": false, 00:44:13.672 "reset": true, 00:44:13.672 "nvme_admin": false, 00:44:13.672 "nvme_io": false, 00:44:13.672 "nvme_io_md": false, 00:44:13.672 "write_zeroes": true, 00:44:13.672 "zcopy": false, 00:44:13.672 "get_zone_info": false, 00:44:13.672 "zone_management": false, 00:44:13.672 "zone_append": false, 00:44:13.672 "compare": false, 00:44:13.672 "compare_and_write": false, 00:44:13.672 "abort": false, 00:44:13.672 "seek_hole": true, 00:44:13.672 "seek_data": true, 00:44:13.672 "copy": false, 00:44:13.672 "nvme_iov_md": false 00:44:13.672 }, 00:44:13.672 "driver_specific": { 00:44:13.672 "lvol": { 00:44:13.672 "lvol_store_uuid": "301a8f3f-17b8-4408-8a05-e317da6c2e3e", 00:44:13.672 "base_bdev": "nvme0n1", 00:44:13.672 "thin_provision": true, 00:44:13.673 "num_allocated_clusters": 0, 00:44:13.673 "snapshot": false, 00:44:13.673 "clone": false, 00:44:13.673 "esnap_clone": false 00:44:13.673 } 00:44:13.673 } 00:44:13.673 } 00:44:13.673 ]' 00:44:13.673 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:13.673 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:44:13.673 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:13.931 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:13.931 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:13.931 23:25:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 77843652-a46e-498d-9c05-a4ebb9f89e77 --l2p_dram_limit 10' 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:44:13.931 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:44:13.931 23:25:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 77843652-a46e-498d-9c05-a4ebb9f89e77 --l2p_dram_limit 10 -c nvc0n1p0 00:44:13.931 
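At this point restore.sh has assembled the FTL device: the NVMe controller on 0000:00:11.0 backs a thin-provisioned 103424 MiB (101 GiB) logical volume, the controller on 0000:00:10.0 provides a 5171 MiB split of nvc0n1 as the non-volatile write cache, and bdev_ftl_create is issued with a 10 MiB L2P DRAM limit (l2p_dram_size_mb=10 in the script). The '[: : integer expression expected' message above comes from an empty operand in the script's option test at line 54 and does not stop the run. A minimal sketch of the RPC sequence, condensed from the commands above (rpc.py paths shortened; the UUIDs are generated per run, so placeholders stand in for them):

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base namespace
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin-provisioned lvol
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache split
rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The notices that follow show the resulting FTL startup; 'Create new FTL, UUID ...' indicates a fresh superblock is being written rather than an existing one restored.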
[2024-12-09 23:25:54.552786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.553099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:13.931 [2024-12-09 23:25:54.553124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:13.931 [2024-12-09 23:25:54.553132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.553201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.553210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:13.931 [2024-12-09 23:25:54.553218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:44:13.931 [2024-12-09 23:25:54.553225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.553247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:13.931 [2024-12-09 23:25:54.553956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:13.931 [2024-12-09 23:25:54.553993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.554000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:13.931 [2024-12-09 23:25:54.554010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:44:13.931 [2024-12-09 23:25:54.554016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.554045] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6496dd5f-945a-4404-a378-a98a70535383 00:44:13.931 [2024-12-09 23:25:54.555349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.555380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:13.931 [2024-12-09 23:25:54.555389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:44:13.931 [2024-12-09 23:25:54.555398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.562360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.562392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:13.931 [2024-12-09 23:25:54.562402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:44:13.931 [2024-12-09 23:25:54.562410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.562485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.562494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:13.931 [2024-12-09 23:25:54.562502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:44:13.931 [2024-12-09 23:25:54.562513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.562555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.931 [2024-12-09 23:25:54.562566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:13.931 [2024-12-09 23:25:54.562574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:13.931 [2024-12-09 23:25:54.562582] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.931 [2024-12-09 23:25:54.562600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:14.190 [2024-12-09 23:25:54.565934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:14.190 [2024-12-09 23:25:54.565958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:14.190 [2024-12-09 23:25:54.565969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:44:14.190 [2024-12-09 23:25:54.565976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:14.190 [2024-12-09 23:25:54.566017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:14.190 [2024-12-09 23:25:54.566024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:14.190 [2024-12-09 23:25:54.566032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:14.190 [2024-12-09 23:25:54.566039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:14.190 [2024-12-09 23:25:54.566059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:14.190 [2024-12-09 23:25:54.566178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:14.190 [2024-12-09 23:25:54.566192] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:14.190 [2024-12-09 23:25:54.566201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:14.190 [2024-12-09 23:25:54.566212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:14.190 [2024-12-09 23:25:54.566220] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:14.190 [2024-12-09 23:25:54.566229] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:14.190 [2024-12-09 23:25:54.566235] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:14.190 [2024-12-09 23:25:54.566246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:14.190 [2024-12-09 23:25:54.566251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:14.190 [2024-12-09 23:25:54.566261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:14.190 [2024-12-09 23:25:54.566274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:14.190 [2024-12-09 23:25:54.566282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:44:14.190 [2024-12-09 23:25:54.566288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:14.190 [2024-12-09 23:25:54.566356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:14.190 [2024-12-09 23:25:54.566364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:14.190 [2024-12-09 23:25:54.566371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:44:14.190 [2024-12-09 23:25:54.566377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:14.190 [2024-12-09 23:25:54.566457] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:14.190 [2024-12-09 23:25:54.566466] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:44:14.190 [2024-12-09 23:25:54.566474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:14.191 [2024-12-09 23:25:54.566494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:14.191 [2024-12-09 23:25:54.566513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:14.191 [2024-12-09 23:25:54.566525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:14.191 [2024-12-09 23:25:54.566532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:14.191 [2024-12-09 23:25:54.566540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:14.191 [2024-12-09 23:25:54.566546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:14.191 [2024-12-09 23:25:54.566553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:14.191 [2024-12-09 23:25:54.566558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:14.191 [2024-12-09 23:25:54.566575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:14.191 [2024-12-09 23:25:54.566594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:14.191 [2024-12-09 23:25:54.566612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:14.191 [2024-12-09 23:25:54.566631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:14.191 [2024-12-09 23:25:54.566648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:14.191 [2024-12-09 23:25:54.566669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566674] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:14.191 [2024-12-09 23:25:54.566681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:14.191 [2024-12-09 23:25:54.566685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:14.191 [2024-12-09 23:25:54.566692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:14.191 [2024-12-09 23:25:54.566697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:14.191 [2024-12-09 23:25:54.566705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:14.191 [2024-12-09 23:25:54.566711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:14.191 [2024-12-09 23:25:54.566723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:14.191 [2024-12-09 23:25:54.566730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566735] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:14.191 [2024-12-09 23:25:54.566742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:14.191 [2024-12-09 23:25:54.566748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:14.191 [2024-12-09 23:25:54.566763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:14.191 [2024-12-09 23:25:54.566774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:14.191 [2024-12-09 23:25:54.566779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:14.191 [2024-12-09 23:25:54.566786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:14.191 [2024-12-09 23:25:54.566791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:14.191 [2024-12-09 23:25:54.566798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:14.191 [2024-12-09 23:25:54.566805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:14.191 [2024-12-09 23:25:54.566817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:14.191 [2024-12-09 23:25:54.566831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:14.191 [2024-12-09 23:25:54.566836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:14.191 [2024-12-09 23:25:54.566844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:14.191 [2024-12-09 23:25:54.566850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:14.191 [2024-12-09 23:25:54.566857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:44:14.191 [2024-12-09 23:25:54.566863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:14.191 [2024-12-09 23:25:54.566870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:14.191 [2024-12-09 23:25:54.566875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:14.191 [2024-12-09 23:25:54.566884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:14.191 [2024-12-09 23:25:54.566915] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:14.191 [2024-12-09 23:25:54.566924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:14.191 [2024-12-09 23:25:54.566937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:14.191 [2024-12-09 23:25:54.566942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:14.191 [2024-12-09 23:25:54.566951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:14.191 [2024-12-09 23:25:54.566957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:14.191 [2024-12-09 23:25:54.566964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:14.191 [2024-12-09 23:25:54.566969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:44:14.191 [2024-12-09 23:25:54.566977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:14.191 [2024-12-09 23:25:54.567031] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:44:14.191 [2024-12-09 23:25:54.567044] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:16.718 [2024-12-09 23:25:57.272997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.273230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:16.718 [2024-12-09 23:25:57.273298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2705.938 ms 00:44:16.718 [2024-12-09 23:25:57.273326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.301698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.301856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:16.718 [2024-12-09 23:25:57.301915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.049 ms 00:44:16.718 [2024-12-09 23:25:57.301941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.302092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.302171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:16.718 [2024-12-09 23:25:57.302197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:44:16.718 [2024-12-09 23:25:57.302224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.335024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.335163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:16.718 [2024-12-09 23:25:57.335220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.516 ms 00:44:16.718 [2024-12-09 23:25:57.335246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.335284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.335313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:16.718 [2024-12-09 23:25:57.335333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:44:16.718 [2024-12-09 23:25:57.335362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.335796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.335895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:16.718 [2024-12-09 23:25:57.335952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:44:16.718 [2024-12-09 23:25:57.335978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.336110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.336134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:16.718 [2024-12-09 23:25:57.336157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:44:16.718 [2024-12-09 23:25:57.336180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.718 [2024-12-09 23:25:57.351724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.718 [2024-12-09 23:25:57.351834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:16.718 [2024-12-09 
23:25:57.351886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.516 ms 00:44:16.718 [2024-12-09 23:25:57.351911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.383968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:16.976 [2024-12-09 23:25:57.387317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.387417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:16.976 [2024-12-09 23:25:57.387469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.306 ms 00:44:16.976 [2024-12-09 23:25:57.387492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.457559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.457687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:16.976 [2024-12-09 23:25:57.457744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.017 ms 00:44:16.976 [2024-12-09 23:25:57.457768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.458008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.458042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:16.976 [2024-12-09 23:25:57.458098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:44:16.976 [2024-12-09 23:25:57.458120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.481341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.481443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:16.976 [2024-12-09 23:25:57.481494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms 00:44:16.976 [2024-12-09 23:25:57.481517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.504418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.504518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:16.976 [2024-12-09 23:25:57.504580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.821 ms 00:44:16.976 [2024-12-09 23:25:57.504600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.505204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.505283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:16.976 [2024-12-09 23:25:57.505333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:44:16.976 [2024-12-09 23:25:57.505358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 23:25:57.576342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.576449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:16.976 [2024-12-09 23:25:57.576520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.937 ms 00:44:16.976 [2024-12-09 23:25:57.576544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:16.976 [2024-12-09 
23:25:57.601692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:16.976 [2024-12-09 23:25:57.601804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:16.976 [2024-12-09 23:25:57.601867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.070 ms 00:44:16.976 [2024-12-09 23:25:57.601890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.234 [2024-12-09 23:25:57.625552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.234 [2024-12-09 23:25:57.625586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:17.234 [2024-12-09 23:25:57.625599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.608 ms 00:44:17.234 [2024-12-09 23:25:57.625607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.234 [2024-12-09 23:25:57.649109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.234 [2024-12-09 23:25:57.649220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:17.234 [2024-12-09 23:25:57.649239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.465 ms 00:44:17.234 [2024-12-09 23:25:57.649248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.234 [2024-12-09 23:25:57.649284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.234 [2024-12-09 23:25:57.649294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:17.234 [2024-12-09 23:25:57.649308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:17.234 [2024-12-09 23:25:57.649315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.234 [2024-12-09 23:25:57.649394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.234 [2024-12-09 23:25:57.649406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:17.234 [2024-12-09 23:25:57.649417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:44:17.234 [2024-12-09 23:25:57.649425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.234 [2024-12-09 23:25:57.650400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3097.144 ms, result 0 00:44:17.234 { 00:44:17.234 "name": "ftl0", 00:44:17.234 "uuid": "6496dd5f-945a-4404-a378-a98a70535383" 00:44:17.234 } 00:44:17.234 23:25:57 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:44:17.234 23:25:57 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:44:17.492 23:25:57 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:44:17.492 23:25:57 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:17.492 [2024-12-09 23:25:58.057834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.057876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:17.492 [2024-12-09 23:25:58.057887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:44:17.492 [2024-12-09 23:25:58.057897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.057919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:44:17.492 [2024-12-09 23:25:58.060707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.060820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:17.492 [2024-12-09 23:25:58.060839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:44:17.492 [2024-12-09 23:25:58.060848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.061135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.061149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:17.492 [2024-12-09 23:25:58.061159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:44:17.492 [2024-12-09 23:25:58.061168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.064418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.064439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:17.492 [2024-12-09 23:25:58.064450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:44:17.492 [2024-12-09 23:25:58.064459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.070575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.070610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:17.492 [2024-12-09 23:25:58.070626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:44:17.492 [2024-12-09 23:25:58.070634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.093844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.093872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:17.492 [2024-12-09 23:25:58.093885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.155 ms 00:44:17.492 [2024-12-09 23:25:58.093892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.109405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.109517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:17.492 [2024-12-09 23:25:58.109538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.474 ms 00:44:17.492 [2024-12-09 23:25:58.109546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.492 [2024-12-09 23:25:58.109715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.492 [2024-12-09 23:25:58.109727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:17.492 [2024-12-09 23:25:58.109738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:44:17.492 [2024-12-09 23:25:58.109746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.752 [2024-12-09 23:25:58.133096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.752 [2024-12-09 23:25:58.133124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:17.752 [2024-12-09 23:25:58.133137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.329 ms 00:44:17.752 [2024-12-09 23:25:58.133144] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.752 [2024-12-09 23:25:58.155835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.752 [2024-12-09 23:25:58.155862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:17.752 [2024-12-09 23:25:58.155873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.656 ms 00:44:17.752 [2024-12-09 23:25:58.155880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.752 [2024-12-09 23:25:58.178160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.752 [2024-12-09 23:25:58.178189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:17.752 [2024-12-09 23:25:58.178200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.245 ms 00:44:17.752 [2024-12-09 23:25:58.178208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.752 [2024-12-09 23:25:58.200707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.752 [2024-12-09 23:25:58.200735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:17.752 [2024-12-09 23:25:58.200747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.434 ms 00:44:17.752 [2024-12-09 23:25:58.200754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.752 [2024-12-09 23:25:58.200787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:17.752 [2024-12-09 23:25:58.200800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 
23:25:58.200926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.200977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:44:17.752 [2024-12-09 23:25:58.201166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:17.752 [2024-12-09 23:25:58.201444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:17.753 [2024-12-09 23:25:58.201715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:17.753 [2024-12-09 23:25:58.201725] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6496dd5f-945a-4404-a378-a98a70535383 00:44:17.753 [2024-12-09 23:25:58.201734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:17.753 [2024-12-09 23:25:58.201745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:17.753 [2024-12-09 23:25:58.201754] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:17.753 [2024-12-09 23:25:58.201763] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:17.753 [2024-12-09 23:25:58.201770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:17.753 [2024-12-09 23:25:58.201779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:17.753 [2024-12-09 23:25:58.201787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:17.753 [2024-12-09 23:25:58.201795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:17.753 [2024-12-09 23:25:58.201801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:17.753 [2024-12-09 23:25:58.201810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.753 [2024-12-09 23:25:58.201818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:17.753 [2024-12-09 23:25:58.201827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:44:17.753 [2024-12-09 23:25:58.201836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.214369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.753 [2024-12-09 23:25:58.214396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:44:17.753 [2024-12-09 23:25:58.214407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.502 ms 00:44:17.753 [2024-12-09 23:25:58.214415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.214768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:17.753 [2024-12-09 23:25:58.214788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:17.753 [2024-12-09 23:25:58.214801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:44:17.753 [2024-12-09 23:25:58.214809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.258240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:17.753 [2024-12-09 23:25:58.258272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:17.753 [2024-12-09 23:25:58.258285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:17.753 [2024-12-09 23:25:58.258293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.258356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:17.753 [2024-12-09 23:25:58.258366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:17.753 [2024-12-09 23:25:58.258379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:17.753 [2024-12-09 23:25:58.258386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.258459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:17.753 [2024-12-09 23:25:58.258470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:17.753 [2024-12-09 23:25:58.258481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:17.753 [2024-12-09 23:25:58.258488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.258509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:17.753 [2024-12-09 23:25:58.258518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:17.753 [2024-12-09 23:25:58.258528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:17.753 [2024-12-09 23:25:58.258538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:17.753 [2024-12-09 23:25:58.338215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:17.753 [2024-12-09 23:25:58.338260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:17.753 [2024-12-09 23:25:58.338273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:17.753 [2024-12-09 23:25:58.338281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:18.012 [2024-12-09 23:25:58.400103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400212] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:18.012 [2024-12-09 23:25:58.400220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:18.012 [2024-12-09 23:25:58.400285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:18.012 [2024-12-09 23:25:58.400387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:18.012 [2024-12-09 23:25:58.400441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:18.012 [2024-12-09 23:25:58.400501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:18.012 [2024-12-09 23:25:58.400560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:18.012 [2024-12-09 23:25:58.400569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:18.012 [2024-12-09 23:25:58.400574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:18.012 [2024-12-09 23:25:58.400697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.828 ms, result 0 00:44:18.012 true 00:44:18.012 23:25:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77362 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77362 ']' 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77362 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77362 00:44:18.012 killing process with pid 77362 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77362' 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77362 00:44:18.012 23:25:58 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77362 00:44:23.295 23:26:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:44:27.513 262144+0 records in 00:44:27.513 262144+0 records out 00:44:27.514 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.29095 s, 250 MB/s 00:44:27.514 23:26:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:44:30.061 23:26:10 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:30.061 [2024-12-09 23:26:10.144947] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:44:30.061 [2024-12-09 23:26:10.145055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77594 ] 00:44:30.061 [2024-12-09 23:26:10.301937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.061 [2024-12-09 23:26:10.444181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:30.322 [2024-12-09 23:26:10.779267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:30.322 [2024-12-09 23:26:10.779375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:30.322 [2024-12-09 23:26:10.940761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.322 [2024-12-09 23:26:10.940829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:30.322 [2024-12-09 23:26:10.940845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:30.322 [2024-12-09 23:26:10.940855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.322 [2024-12-09 23:26:10.940916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.322 [2024-12-09 23:26:10.940931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:30.322 [2024-12-09 23:26:10.940940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:44:30.322 [2024-12-09 23:26:10.940948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.322 [2024-12-09 23:26:10.940971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:30.322 [2024-12-09 23:26:10.941829] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:30.322 [2024-12-09 23:26:10.941863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.322 [2024-12-09 23:26:10.941872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:30.322 [2024-12-09 23:26:10.941882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:44:30.322 [2024-12-09 23:26:10.941891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.322 [2024-12-09 23:26:10.944269] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:30.584 [2024-12-09 23:26:10.960356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.960412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:30.584 [2024-12-09 23:26:10.960426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.087 ms 00:44:30.584 [2024-12-09 23:26:10.960435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.960524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.960536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:30.584 [2024-12-09 23:26:10.960546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:44:30.584 [2024-12-09 23:26:10.960554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.971976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.972034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:30.584 [2024-12-09 23:26:10.972047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.339 ms 00:44:30.584 [2024-12-09 23:26:10.972062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.972151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.972160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:30.584 [2024-12-09 23:26:10.972172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:30.584 [2024-12-09 23:26:10.972180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.972240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.972253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:30.584 [2024-12-09 23:26:10.972262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:30.584 [2024-12-09 23:26:10.972271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.972299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:30.584 [2024-12-09 23:26:10.976914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.976955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:30.584 [2024-12-09 23:26:10.976970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.620 ms 00:44:30.584 [2024-12-09 23:26:10.977006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.977051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.977061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:30.584 [2024-12-09 23:26:10.977071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:44:30.584 [2024-12-09 23:26:10.977081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.977120] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:30.584 [2024-12-09 23:26:10.977151] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:30.584 [2024-12-09 23:26:10.977194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:30.584 [2024-12-09 23:26:10.977215] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:30.584 [2024-12-09 23:26:10.977329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:30.584 [2024-12-09 23:26:10.977343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:30.584 [2024-12-09 23:26:10.977356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:30.584 [2024-12-09 23:26:10.977367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:30.584 [2024-12-09 23:26:10.977395] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:30.584 [2024-12-09 23:26:10.977407] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:30.584 [2024-12-09 23:26:10.977415] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:30.584 [2024-12-09 23:26:10.977425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.977434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:30.584 [2024-12-09 23:26:10.977443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:44:30.584 [2024-12-09 23:26:10.977450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.977533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.584 [2024-12-09 23:26:10.977551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:30.584 [2024-12-09 23:26:10.977558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:44:30.584 [2024-12-09 23:26:10.977566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.584 [2024-12-09 23:26:10.977712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:30.584 [2024-12-09 23:26:10.977730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:30.584 [2024-12-09 23:26:10.977743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:30.584 [2024-12-09 23:26:10.977781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:30.584 [2024-12-09 23:26:10.977806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:30.584 [2024-12-09 
23:26:10.977813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:30.584 [2024-12-09 23:26:10.977821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:30.584 [2024-12-09 23:26:10.977828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:30.584 [2024-12-09 23:26:10.977835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:30.584 [2024-12-09 23:26:10.977850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:30.584 [2024-12-09 23:26:10.977857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:30.584 [2024-12-09 23:26:10.977864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:30.584 [2024-12-09 23:26:10.977877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:30.584 [2024-12-09 23:26:10.977897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:30.584 [2024-12-09 23:26:10.977918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:30.584 [2024-12-09 23:26:10.977938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:30.584 [2024-12-09 23:26:10.977959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:30.584 [2024-12-09 23:26:10.977965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:30.584 [2024-12-09 23:26:10.977972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:30.584 [2024-12-09 23:26:10.977979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:30.584 [2024-12-09 23:26:10.978262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:30.584 [2024-12-09 23:26:10.978283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:30.584 [2024-12-09 23:26:10.978303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:30.584 [2024-12-09 23:26:10.978322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:30.584 [2024-12-09 23:26:10.978341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:30.584 [2024-12-09 23:26:10.978360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:30.584 [2024-12-09 23:26:10.978381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.978400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:44:30.584 [2024-12-09 23:26:10.978418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:30.584 [2024-12-09 23:26:10.978436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.584 [2024-12-09 23:26:10.978454] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:30.584 [2024-12-09 23:26:10.978474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:30.585 [2024-12-09 23:26:10.978493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:30.585 [2024-12-09 23:26:10.978511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:30.585 [2024-12-09 23:26:10.978530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:30.585 [2024-12-09 23:26:10.978548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:30.585 [2024-12-09 23:26:10.978566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:30.585 [2024-12-09 23:26:10.978583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:30.585 [2024-12-09 23:26:10.978675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:30.585 [2024-12-09 23:26:10.978687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:30.585 [2024-12-09 23:26:10.978698] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:30.585 [2024-12-09 23:26:10.978710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:30.585 [2024-12-09 23:26:10.978735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:30.585 [2024-12-09 23:26:10.978743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:30.585 [2024-12-09 23:26:10.978751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:30.585 [2024-12-09 23:26:10.978759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:30.585 [2024-12-09 23:26:10.978766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:30.585 [2024-12-09 23:26:10.978775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:30.585 [2024-12-09 23:26:10.978783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:30.585 [2024-12-09 23:26:10.978791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:30.585 [2024-12-09 23:26:10.978798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:30.585 [2024-12-09 23:26:10.978839] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:30.585 [2024-12-09 23:26:10.978849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:30.585 [2024-12-09 23:26:10.978866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:30.585 [2024-12-09 23:26:10.978874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:30.585 [2024-12-09 23:26:10.978882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:30.585 [2024-12-09 23:26:10.978892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:10.978900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:30.585 [2024-12-09 23:26:10.978909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:44:30.585 [2024-12-09 23:26:10.978917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.017258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.017306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:30.585 [2024-12-09 23:26:11.017319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.257 ms 00:44:30.585 [2024-12-09 23:26:11.017333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.017428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.017440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:30.585 [2024-12-09 23:26:11.017449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:30.585 [2024-12-09 23:26:11.017458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.065924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.065974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:30.585 [2024-12-09 23:26:11.066002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.403 ms 00:44:30.585 [2024-12-09 23:26:11.066012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.066062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 
23:26:11.066073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:30.585 [2024-12-09 23:26:11.066087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:30.585 [2024-12-09 23:26:11.066097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.066832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.066858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:30.585 [2024-12-09 23:26:11.066872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:44:30.585 [2024-12-09 23:26:11.066881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.067087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.067101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:30.585 [2024-12-09 23:26:11.067118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:44:30.585 [2024-12-09 23:26:11.067128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.085801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.085904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:30.585 [2024-12-09 23:26:11.085917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.649 ms 00:44:30.585 [2024-12-09 23:26:11.085926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.100956] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:44:30.585 [2024-12-09 23:26:11.101009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:30.585 [2024-12-09 23:26:11.101024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.101034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:30.585 [2024-12-09 23:26:11.101045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.962 ms 00:44:30.585 [2024-12-09 23:26:11.101054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.127448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.127500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:30.585 [2024-12-09 23:26:11.127512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.340 ms 00:44:30.585 [2024-12-09 23:26:11.127521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.141002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.141051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:30.585 [2024-12-09 23:26:11.141066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.425 ms 00:44:30.585 [2024-12-09 23:26:11.141075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.153472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.153513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:44:30.585 [2024-12-09 23:26:11.153526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.339 ms 00:44:30.585 [2024-12-09 23:26:11.153534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.585 [2024-12-09 23:26:11.154225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.585 [2024-12-09 23:26:11.154247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:30.585 [2024-12-09 23:26:11.154257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:44:30.585 [2024-12-09 23:26:11.154270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.226445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.226509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:30.847 [2024-12-09 23:26:11.226527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.150 ms 00:44:30.847 [2024-12-09 23:26:11.226544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.239296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:30.847 [2024-12-09 23:26:11.243135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.243175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:30.847 [2024-12-09 23:26:11.243189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.525 ms 00:44:30.847 [2024-12-09 23:26:11.243198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.243298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.243312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:30.847 [2024-12-09 23:26:11.243323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:44:30.847 [2024-12-09 23:26:11.243333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.243417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.243429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:30.847 [2024-12-09 23:26:11.243439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:44:30.847 [2024-12-09 23:26:11.243447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.243470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.243479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:30.847 [2024-12-09 23:26:11.243488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:30.847 [2024-12-09 23:26:11.243497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.243538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:30.847 [2024-12-09 23:26:11.243553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.243563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:30.847 [2024-12-09 23:26:11.243572] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:44:30.847 [2024-12-09 23:26:11.243581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.270016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.270057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:30.847 [2024-12-09 23:26:11.270071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.414 ms 00:44:30.847 [2024-12-09 23:26:11.270087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.270175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.847 [2024-12-09 23:26:11.270188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:30.847 [2024-12-09 23:26:11.270198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:44:30.847 [2024-12-09 23:26:11.270207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.847 [2024-12-09 23:26:11.271681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.363 ms, result 0 00:44:31.862  [2024-12-09T23:26:13.440Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-09T23:26:14.380Z] Copying: 33/1024 [MB] (16 MBps) [2024-12-09T23:26:15.325Z] Copying: 54/1024 [MB] (21 MBps) [2024-12-09T23:26:16.705Z] Copying: 78/1024 [MB] (23 MBps) [2024-12-09T23:26:17.638Z] Copying: 96/1024 [MB] (18 MBps) [2024-12-09T23:26:18.573Z] Copying: 107/1024 [MB] (11 MBps) [2024-12-09T23:26:19.508Z] Copying: 118/1024 [MB] (10 MBps) [2024-12-09T23:26:20.442Z] Copying: 128/1024 [MB] (10 MBps) [2024-12-09T23:26:21.376Z] Copying: 140/1024 [MB] (11 MBps) [2024-12-09T23:26:22.310Z] Copying: 151/1024 [MB] (11 MBps) [2024-12-09T23:26:23.690Z] Copying: 162/1024 [MB] (11 MBps) [2024-12-09T23:26:24.624Z] Copying: 173/1024 [MB] (10 MBps) [2024-12-09T23:26:25.556Z] Copying: 185/1024 [MB] (11 MBps) [2024-12-09T23:26:26.549Z] Copying: 196/1024 [MB] (11 MBps) [2024-12-09T23:26:27.492Z] Copying: 207/1024 [MB] (10 MBps) [2024-12-09T23:26:28.433Z] Copying: 224/1024 [MB] (17 MBps) [2024-12-09T23:26:29.367Z] Copying: 235/1024 [MB] (10 MBps) [2024-12-09T23:26:30.309Z] Copying: 247/1024 [MB] (12 MBps) [2024-12-09T23:26:31.683Z] Copying: 258/1024 [MB] (11 MBps) [2024-12-09T23:26:32.617Z] Copying: 271/1024 [MB] (12 MBps) [2024-12-09T23:26:33.558Z] Copying: 283/1024 [MB] (11 MBps) [2024-12-09T23:26:34.493Z] Copying: 294/1024 [MB] (11 MBps) [2024-12-09T23:26:35.428Z] Copying: 304/1024 [MB] (10 MBps) [2024-12-09T23:26:36.371Z] Copying: 315/1024 [MB] (10 MBps) [2024-12-09T23:26:37.314Z] Copying: 326/1024 [MB] (10 MBps) [2024-12-09T23:26:38.700Z] Copying: 343136/1048576 [kB] (9076 kBps) [2024-12-09T23:26:39.647Z] Copying: 352560/1048576 [kB] (9424 kBps) [2024-12-09T23:26:40.650Z] Copying: 362748/1048576 [kB] (10188 kBps) [2024-12-09T23:26:41.590Z] Copying: 364/1024 [MB] (10 MBps) [2024-12-09T23:26:42.524Z] Copying: 383088/1048576 [kB] (10084 kBps) [2024-12-09T23:26:43.461Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-09T23:26:44.406Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-09T23:26:45.346Z] Copying: 405/1024 [MB] (10 MBps) [2024-12-09T23:26:46.420Z] Copying: 425424/1048576 [kB] (9928 kBps) [2024-12-09T23:26:47.355Z] Copying: 425/1024 [MB] (10 MBps) [2024-12-09T23:26:48.289Z] Copying: 436/1024 [MB] (11 MBps) [2024-12-09T23:26:49.673Z] Copying: 448/1024 [MB] (11 MBps) [2024-12-09T23:26:50.616Z] Copying: 472/1024 [MB] 
(24 MBps) [2024-12-09T23:26:51.560Z] Copying: 507/1024 [MB] (35 MBps) [2024-12-09T23:26:52.503Z] Copying: 548/1024 [MB] (41 MBps) [2024-12-09T23:26:53.507Z] Copying: 592/1024 [MB] (43 MBps) [2024-12-09T23:26:54.449Z] Copying: 614/1024 [MB] (22 MBps) [2024-12-09T23:26:55.391Z] Copying: 628/1024 [MB] (13 MBps) [2024-12-09T23:26:56.331Z] Copying: 644/1024 [MB] (16 MBps) [2024-12-09T23:26:57.711Z] Copying: 661/1024 [MB] (16 MBps) [2024-12-09T23:26:58.647Z] Copying: 677/1024 [MB] (16 MBps) [2024-12-09T23:26:59.579Z] Copying: 693/1024 [MB] (15 MBps) [2024-12-09T23:27:00.519Z] Copying: 706/1024 [MB] (12 MBps) [2024-12-09T23:27:01.457Z] Copying: 719/1024 [MB] (13 MBps) [2024-12-09T23:27:02.397Z] Copying: 742/1024 [MB] (22 MBps) [2024-12-09T23:27:03.338Z] Copying: 760/1024 [MB] (18 MBps) [2024-12-09T23:27:04.721Z] Copying: 773/1024 [MB] (12 MBps) [2024-12-09T23:27:05.289Z] Copying: 790/1024 [MB] (16 MBps) [2024-12-09T23:27:06.670Z] Copying: 802/1024 [MB] (12 MBps) [2024-12-09T23:27:07.608Z] Copying: 817/1024 [MB] (14 MBps) [2024-12-09T23:27:08.543Z] Copying: 829/1024 [MB] (11 MBps) [2024-12-09T23:27:09.588Z] Copying: 842/1024 [MB] (13 MBps) [2024-12-09T23:27:10.532Z] Copying: 856/1024 [MB] (13 MBps) [2024-12-09T23:27:11.478Z] Copying: 866/1024 [MB] (10 MBps) [2024-12-09T23:27:12.415Z] Copying: 877/1024 [MB] (10 MBps) [2024-12-09T23:27:13.359Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-09T23:27:14.303Z] Copying: 898/1024 [MB] (10 MBps) [2024-12-09T23:27:15.686Z] Copying: 930408/1048576 [kB] (9964 kBps) [2024-12-09T23:27:16.623Z] Copying: 919/1024 [MB] (10 MBps) [2024-12-09T23:27:17.566Z] Copying: 930/1024 [MB] (11 MBps) [2024-12-09T23:27:18.500Z] Copying: 940/1024 [MB] (10 MBps) [2024-12-09T23:27:19.436Z] Copying: 951/1024 [MB] (11 MBps) [2024-12-09T23:27:20.372Z] Copying: 963/1024 [MB] (11 MBps) [2024-12-09T23:27:21.310Z] Copying: 974/1024 [MB] (11 MBps) [2024-12-09T23:27:22.685Z] Copying: 985/1024 [MB] (10 MBps) [2024-12-09T23:27:23.620Z] Copying: 997/1024 [MB] (11 MBps) [2024-12-09T23:27:24.557Z] Copying: 1008/1024 [MB] (11 MBps) [2024-12-09T23:27:24.818Z] Copying: 1019/1024 [MB] (11 MBps) [2024-12-09T23:27:24.818Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-09 23:27:24.690765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.690826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:44.183 [2024-12-09 23:27:24.690842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:44.183 [2024-12-09 23:27:24.690850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.690871] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:44.183 [2024-12-09 23:27:24.693777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.693809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:44.183 [2024-12-09 23:27:24.693825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:45:44.183 [2024-12-09 23:27:24.693834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.696420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.696449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:44.183 [2024-12-09 23:27:24.696460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.563 ms 00:45:44.183 [2024-12-09 23:27:24.696468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.712712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.712756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:44.183 [2024-12-09 23:27:24.712767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.228 ms 00:45:44.183 [2024-12-09 23:27:24.712775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.718864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.718893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:44.183 [2024-12-09 23:27:24.718904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.053 ms 00:45:44.183 [2024-12-09 23:27:24.718911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.744491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.744529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:44.183 [2024-12-09 23:27:24.744540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.531 ms 00:45:44.183 [2024-12-09 23:27:24.744548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.760416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.760456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:44.183 [2024-12-09 23:27:24.760467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.831 ms 00:45:44.183 [2024-12-09 23:27:24.760475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.760611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.760626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:44.183 [2024-12-09 23:27:24.760636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:45:44.183 [2024-12-09 23:27:24.760644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.785483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.785524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:44.183 [2024-12-09 23:27:24.785535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.824 ms 00:45:44.183 [2024-12-09 23:27:24.785544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.183 [2024-12-09 23:27:24.811092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.183 [2024-12-09 23:27:24.811137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:44.183 [2024-12-09 23:27:24.811148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.507 ms 00:45:44.183 [2024-12-09 23:27:24.811156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.445 [2024-12-09 23:27:24.835918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.446 [2024-12-09 23:27:24.835964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:44.446 [2024-12-09 
23:27:24.835977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.716 ms 00:45:44.446 [2024-12-09 23:27:24.835997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.446 [2024-12-09 23:27:24.861048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.446 [2024-12-09 23:27:24.861094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:44.446 [2024-12-09 23:27:24.861105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.973 ms 00:45:44.446 [2024-12-09 23:27:24.861113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.446 [2024-12-09 23:27:24.861158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:44.446 [2024-12-09 23:27:24.861177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861347] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 
23:27:24.861745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:44.446 [2024-12-09 23:27:24.861852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:45:44.447 [2024-12-09 23:27:24.861964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.861979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.862004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.862012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.862021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:44.447 [2024-12-09 23:27:24.862038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:44.447 [2024-12-09 23:27:24.862051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6496dd5f-945a-4404-a378-a98a70535383 00:45:44.447 [2024-12-09 23:27:24.862060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:44.447 [2024-12-09 23:27:24.862067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:44.447 [2024-12-09 23:27:24.862075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:44.447 [2024-12-09 23:27:24.862084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:44.447 [2024-12-09 23:27:24.862091] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:44.447 [2024-12-09 23:27:24.862576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:44.447 [2024-12-09 23:27:24.862584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:44.447 [2024-12-09 23:27:24.862591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:44.447 [2024-12-09 23:27:24.862598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:44.447 [2024-12-09 23:27:24.862606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.447 [2024-12-09 23:27:24.862614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:44.447 [2024-12-09 23:27:24.862624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:45:44.447 [2024-12-09 23:27:24.862632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.877517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.447 [2024-12-09 23:27:24.877562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:44.447 [2024-12-09 23:27:24.877573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.846 ms 00:45:44.447 [2024-12-09 23:27:24.877582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.878069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.447 [2024-12-09 23:27:24.878098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:44.447 [2024-12-09 23:27:24.878108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:45:44.447 [2024-12-09 23:27:24.878125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.917873] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:45:44.447 [2024-12-09 23:27:24.917923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:44.447 [2024-12-09 23:27:24.917935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.447 [2024-12-09 23:27:24.917945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.918027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.447 [2024-12-09 23:27:24.918037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:44.447 [2024-12-09 23:27:24.918048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.447 [2024-12-09 23:27:24.918064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.918137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.447 [2024-12-09 23:27:24.918149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:44.447 [2024-12-09 23:27:24.918158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.447 [2024-12-09 23:27:24.918167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:24.918183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.447 [2024-12-09 23:27:24.918192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:44.447 [2024-12-09 23:27:24.918203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.447 [2024-12-09 23:27:24.918211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.447 [2024-12-09 23:27:25.008876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.447 [2024-12-09 23:27:25.008936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:44.447 [2024-12-09 23:27:25.008950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.447 [2024-12-09 23:27:25.008960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:44.708 [2024-12-09 23:27:25.083104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:44.708 [2024-12-09 23:27:25.083247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:44.708 [2024-12-09 23:27:25.083321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:45:44.708 [2024-12-09 23:27:25.083449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:44.708 [2024-12-09 23:27:25.083473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:44.708 [2024-12-09 23:27:25.083541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:44.708 [2024-12-09 23:27:25.083627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:44.708 [2024-12-09 23:27:25.083705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:44.708 [2024-12-09 23:27:25.083714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:44.708 [2024-12-09 23:27:25.083722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.708 [2024-12-09 23:27:25.083885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.067 ms, result 0 00:45:45.651 00:45:45.651 00:45:45.651 23:27:26 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:45:45.651 [2024-12-09 23:27:26.142875] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:45:45.651 [2024-12-09 23:27:26.143011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78376 ] 00:45:45.913 [2024-12-09 23:27:26.301658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.913 [2024-12-09 23:27:26.427031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:46.173 [2024-12-09 23:27:26.766730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:46.173 [2024-12-09 23:27:26.766826] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:46.434 [2024-12-09 23:27:26.931657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.931726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:46.434 [2024-12-09 23:27:26.931743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:46.434 [2024-12-09 23:27:26.931753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.931815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.931830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:46.434 [2024-12-09 23:27:26.931839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:45:46.434 [2024-12-09 23:27:26.931848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.931869] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:46.434 [2024-12-09 23:27:26.932609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:46.434 [2024-12-09 23:27:26.932641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.932650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:46.434 [2024-12-09 23:27:26.932659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:45:46.434 [2024-12-09 23:27:26.932668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.935005] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:46.434 [2024-12-09 23:27:26.950447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.950498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:46.434 [2024-12-09 23:27:26.950512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.467 ms 00:45:46.434 [2024-12-09 23:27:26.950522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.950609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.950620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:46.434 [2024-12-09 23:27:26.950631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:45:46.434 [2024-12-09 23:27:26.950641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.962233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:46.434 [2024-12-09 23:27:26.962273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:46.434 [2024-12-09 23:27:26.962286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.510 ms 00:45:46.434 [2024-12-09 23:27:26.962301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.962389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.962399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:46.434 [2024-12-09 23:27:26.962410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:45:46.434 [2024-12-09 23:27:26.962419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.962479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.962493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:46.434 [2024-12-09 23:27:26.962503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:46.434 [2024-12-09 23:27:26.962512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.962540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:46.434 [2024-12-09 23:27:26.967190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.967232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:46.434 [2024-12-09 23:27:26.967247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:45:46.434 [2024-12-09 23:27:26.967255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.967298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.967308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:46.434 [2024-12-09 23:27:26.967317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:45:46.434 [2024-12-09 23:27:26.967326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.967368] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:46.434 [2024-12-09 23:27:26.967397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:46.434 [2024-12-09 23:27:26.967442] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:46.434 [2024-12-09 23:27:26.967463] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:46.434 [2024-12-09 23:27:26.967578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:46.434 [2024-12-09 23:27:26.967591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:46.434 [2024-12-09 23:27:26.967605] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:46.434 [2024-12-09 23:27:26.967616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:46.434 [2024-12-09 23:27:26.967625] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:46.434 [2024-12-09 23:27:26.967634] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:46.434 [2024-12-09 23:27:26.967642] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:46.434 [2024-12-09 23:27:26.967653] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:46.434 [2024-12-09 23:27:26.967661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:46.434 [2024-12-09 23:27:26.967673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.967682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:46.434 [2024-12-09 23:27:26.967690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:45:46.434 [2024-12-09 23:27:26.967698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.967782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.434 [2024-12-09 23:27:26.967794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:46.434 [2024-12-09 23:27:26.967802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:45:46.434 [2024-12-09 23:27:26.967810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.434 [2024-12-09 23:27:26.967923] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:46.434 [2024-12-09 23:27:26.967943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:46.434 [2024-12-09 23:27:26.967953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:46.434 [2024-12-09 23:27:26.967962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.434 [2024-12-09 23:27:26.967971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:46.434 [2024-12-09 23:27:26.967978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:46.434 [2024-12-09 23:27:26.968028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:46.434 [2024-12-09 23:27:26.968044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:46.434 [2024-12-09 23:27:26.968051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:46.434 [2024-12-09 23:27:26.968059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:46.434 [2024-12-09 23:27:26.968074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:46.434 [2024-12-09 23:27:26.968082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:46.434 [2024-12-09 23:27:26.968089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:46.434 [2024-12-09 23:27:26.968104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968111] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:46.434 [2024-12-09 23:27:26.968125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:46.434 [2024-12-09 23:27:26.968148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:46.434 [2024-12-09 23:27:26.968170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:46.434 [2024-12-09 23:27:26.968191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:46.434 [2024-12-09 23:27:26.968199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:46.434 [2024-12-09 23:27:26.968206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:46.435 [2024-12-09 23:27:26.968214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:46.435 [2024-12-09 23:27:26.968221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:46.435 [2024-12-09 23:27:26.968227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:46.435 [2024-12-09 23:27:26.968234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:46.435 [2024-12-09 23:27:26.968240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:46.435 [2024-12-09 23:27:26.968248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:46.435 [2024-12-09 23:27:26.968255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:46.435 [2024-12-09 23:27:26.968265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.435 [2024-12-09 23:27:26.968273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:46.435 [2024-12-09 23:27:26.968280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:46.435 [2024-12-09 23:27:26.968287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.435 [2024-12-09 23:27:26.968293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:46.435 [2024-12-09 23:27:26.968302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:46.435 [2024-12-09 23:27:26.968310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:46.435 [2024-12-09 23:27:26.968318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:46.435 [2024-12-09 23:27:26.968326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:46.435 [2024-12-09 23:27:26.968332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:46.435 [2024-12-09 23:27:26.968339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:46.435 
[2024-12-09 23:27:26.968346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:46.435 [2024-12-09 23:27:26.968353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:46.435 [2024-12-09 23:27:26.968359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:46.435 [2024-12-09 23:27:26.968367] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:46.435 [2024-12-09 23:27:26.968377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:46.435 [2024-12-09 23:27:26.968395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:46.435 [2024-12-09 23:27:26.968403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:46.435 [2024-12-09 23:27:26.968410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:46.435 [2024-12-09 23:27:26.968417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:46.435 [2024-12-09 23:27:26.968423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:46.435 [2024-12-09 23:27:26.968431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:46.435 [2024-12-09 23:27:26.968438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:46.435 [2024-12-09 23:27:26.968445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:46.435 [2024-12-09 23:27:26.968453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:46.435 [2024-12-09 23:27:26.968489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:46.435 [2024-12-09 23:27:26.968497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:46.435 [2024-12-09 23:27:26.968522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:46.435 [2024-12-09 23:27:26.968529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:46.435 [2024-12-09 23:27:26.968537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:46.435 [2024-12-09 23:27:26.968545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:26.968554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:46.435 [2024-12-09 23:27:26.968564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:45:46.435 [2024-12-09 23:27:26.968571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.006836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.006887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:46.435 [2024-12-09 23:27:27.006900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.213 ms 00:45:46.435 [2024-12-09 23:27:27.006914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.007026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.007037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:46.435 [2024-12-09 23:27:27.007048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:45:46.435 [2024-12-09 23:27:27.007056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.063149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.063206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:46.435 [2024-12-09 23:27:27.063220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.029 ms 00:45:46.435 [2024-12-09 23:27:27.063230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.063282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.063295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:46.435 [2024-12-09 23:27:27.063309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:46.435 [2024-12-09 23:27:27.063318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.064103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.064138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:46.435 [2024-12-09 23:27:27.064150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:45:46.435 [2024-12-09 23:27:27.064160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.435 [2024-12-09 23:27:27.064353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.435 [2024-12-09 23:27:27.064367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:46.435 [2024-12-09 23:27:27.064383] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:45:46.435 [2024-12-09 23:27:27.064391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.082609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.082658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:46.697 [2024-12-09 23:27:27.082670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.194 ms 00:45:46.697 [2024-12-09 23:27:27.082680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.097946] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:46.697 [2024-12-09 23:27:27.098007] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:46.697 [2024-12-09 23:27:27.098021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.098031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:46.697 [2024-12-09 23:27:27.098042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.221 ms 00:45:46.697 [2024-12-09 23:27:27.098051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.124598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.124648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:46.697 [2024-12-09 23:27:27.124662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.491 ms 00:45:46.697 [2024-12-09 23:27:27.124672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.137767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.137815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:46.697 [2024-12-09 23:27:27.137828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.027 ms 00:45:46.697 [2024-12-09 23:27:27.137836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.150543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.150589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:46.697 [2024-12-09 23:27:27.150602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:45:46.697 [2024-12-09 23:27:27.150611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.151286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.151315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:46.697 [2024-12-09 23:27:27.151330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:45:46.697 [2024-12-09 23:27:27.151340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.224457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.224516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:46.697 [2024-12-09 23:27:27.224538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.096 ms 00:45:46.697 [2024-12-09 23:27:27.224549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.239697] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:46.697 [2024-12-09 23:27:27.243778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.243823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:46.697 [2024-12-09 23:27:27.243836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.167 ms 00:45:46.697 [2024-12-09 23:27:27.243846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.243942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.243956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:46.697 [2024-12-09 23:27:27.243970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:45:46.697 [2024-12-09 23:27:27.243980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.244087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.244101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:46.697 [2024-12-09 23:27:27.244111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:45:46.697 [2024-12-09 23:27:27.244120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.244146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.244157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:46.697 [2024-12-09 23:27:27.244166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:46.697 [2024-12-09 23:27:27.244175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.244222] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:46.697 [2024-12-09 23:27:27.244234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.244243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:46.697 [2024-12-09 23:27:27.244253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:45:46.697 [2024-12-09 23:27:27.244262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.270718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.270767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:46.697 [2024-12-09 23:27:27.270786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 00:45:46.697 [2024-12-09 23:27:27.270797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.697 [2024-12-09 23:27:27.270886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:46.697 [2024-12-09 23:27:27.270897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:46.697 [2024-12-09 23:27:27.270909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:45:46.697 [2024-12-09 23:27:27.270918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:45:46.697 [2024-12-09 23:27:27.272465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.205 ms, result 0 00:45:48.079  [2024-12-09T23:27:29.655Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-09T23:27:30.632Z] Copying: 25/1024 [MB] (14 MBps) [2024-12-09T23:27:31.574Z] Copying: 52/1024 [MB] (27 MBps) [2024-12-09T23:27:32.517Z] Copying: 93/1024 [MB] (41 MBps) [2024-12-09T23:27:33.897Z] Copying: 122/1024 [MB] (29 MBps) [2024-12-09T23:27:34.469Z] Copying: 139/1024 [MB] (17 MBps) [2024-12-09T23:27:35.850Z] Copying: 164/1024 [MB] (24 MBps) [2024-12-09T23:27:36.787Z] Copying: 188/1024 [MB] (24 MBps) [2024-12-09T23:27:37.731Z] Copying: 213/1024 [MB] (24 MBps) [2024-12-09T23:27:38.670Z] Copying: 257/1024 [MB] (43 MBps) [2024-12-09T23:27:39.607Z] Copying: 295/1024 [MB] (38 MBps) [2024-12-09T23:27:40.549Z] Copying: 339/1024 [MB] (44 MBps) [2024-12-09T23:27:41.489Z] Copying: 379/1024 [MB] (39 MBps) [2024-12-09T23:27:42.869Z] Copying: 401/1024 [MB] (21 MBps) [2024-12-09T23:27:43.813Z] Copying: 420/1024 [MB] (19 MBps) [2024-12-09T23:27:44.766Z] Copying: 442/1024 [MB] (21 MBps) [2024-12-09T23:27:45.706Z] Copying: 462/1024 [MB] (20 MBps) [2024-12-09T23:27:46.647Z] Copying: 487/1024 [MB] (24 MBps) [2024-12-09T23:27:47.589Z] Copying: 509/1024 [MB] (22 MBps) [2024-12-09T23:27:48.531Z] Copying: 527/1024 [MB] (17 MBps) [2024-12-09T23:27:49.473Z] Copying: 542/1024 [MB] (14 MBps) [2024-12-09T23:27:50.855Z] Copying: 560/1024 [MB] (18 MBps) [2024-12-09T23:27:51.795Z] Copying: 577/1024 [MB] (16 MBps) [2024-12-09T23:27:52.732Z] Copying: 594/1024 [MB] (17 MBps) [2024-12-09T23:27:53.676Z] Copying: 610/1024 [MB] (15 MBps) [2024-12-09T23:27:54.619Z] Copying: 621/1024 [MB] (11 MBps) [2024-12-09T23:27:55.557Z] Copying: 633/1024 [MB] (11 MBps) [2024-12-09T23:27:56.491Z] Copying: 644/1024 [MB] (11 MBps) [2024-12-09T23:27:57.916Z] Copying: 655/1024 [MB] (11 MBps) [2024-12-09T23:27:58.507Z] Copying: 667/1024 [MB] (11 MBps) [2024-12-09T23:27:59.894Z] Copying: 679/1024 [MB] (11 MBps) [2024-12-09T23:28:00.465Z] Copying: 690/1024 [MB] (10 MBps) [2024-12-09T23:28:01.846Z] Copying: 700/1024 [MB] (10 MBps) [2024-12-09T23:28:02.786Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-09T23:28:03.730Z] Copying: 721/1024 [MB] (10 MBps) [2024-12-09T23:28:04.671Z] Copying: 739/1024 [MB] (17 MBps) [2024-12-09T23:28:05.623Z] Copying: 750/1024 [MB] (10 MBps) [2024-12-09T23:28:06.565Z] Copying: 760/1024 [MB] (10 MBps) [2024-12-09T23:28:07.505Z] Copying: 774/1024 [MB] (13 MBps) [2024-12-09T23:28:08.885Z] Copying: 785/1024 [MB] (11 MBps) [2024-12-09T23:28:09.824Z] Copying: 796/1024 [MB] (11 MBps) [2024-12-09T23:28:10.765Z] Copying: 807/1024 [MB] (10 MBps) [2024-12-09T23:28:11.703Z] Copying: 818/1024 [MB] (10 MBps) [2024-12-09T23:28:12.641Z] Copying: 829/1024 [MB] (11 MBps) [2024-12-09T23:28:13.580Z] Copying: 840/1024 [MB] (10 MBps) [2024-12-09T23:28:14.520Z] Copying: 851/1024 [MB] (11 MBps) [2024-12-09T23:28:15.902Z] Copying: 862/1024 [MB] (10 MBps) [2024-12-09T23:28:16.474Z] Copying: 873/1024 [MB] (11 MBps) [2024-12-09T23:28:17.857Z] Copying: 884/1024 [MB] (10 MBps) [2024-12-09T23:28:18.799Z] Copying: 894/1024 [MB] (10 MBps) [2024-12-09T23:28:19.740Z] Copying: 911/1024 [MB] (16 MBps) [2024-12-09T23:28:20.681Z] Copying: 931/1024 [MB] (19 MBps) [2024-12-09T23:28:21.622Z] Copying: 948/1024 [MB] (17 MBps) [2024-12-09T23:28:22.566Z] Copying: 970/1024 [MB] (22 MBps) [2024-12-09T23:28:23.509Z] Copying: 990/1024 [MB] (20 MBps) [2024-12-09T23:28:24.452Z] Copying: 1008/1024 [MB] (18 MBps) 
[2024-12-09T23:28:25.026Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-09 23:28:24.713215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.713345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:44.390 [2024-12-09 23:28:24.713382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:44.390 [2024-12-09 23:28:24.713404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.713464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:44.390 [2024-12-09 23:28:24.716936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.717898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:44.390 [2024-12-09 23:28:24.718056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:46:44.390 [2024-12-09 23:28:24.718094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.718418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.718454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:44.390 [2024-12-09 23:28:24.718478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:46:44.390 [2024-12-09 23:28:24.718499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.723291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.723434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:44.390 [2024-12-09 23:28:24.723500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:46:44.390 [2024-12-09 23:28:24.723532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.729839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.730017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:44.390 [2024-12-09 23:28:24.730246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.269 ms 00:46:44.390 [2024-12-09 23:28:24.730289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.758560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.758750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:44.390 [2024-12-09 23:28:24.758950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.177 ms 00:46:44.390 [2024-12-09 23:28:24.759027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.775094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.775278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:44.390 [2024-12-09 23:28:24.775470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.940 ms 00:46:44.390 [2024-12-09 23:28:24.775513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.775686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.775714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:46:44.390 [2024-12-09 23:28:24.775811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:46:44.390 [2024-12-09 23:28:24.775832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.802580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.802751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:44.390 [2024-12-09 23:28:24.802812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.671 ms 00:46:44.390 [2024-12-09 23:28:24.802835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.828299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.828468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:44.390 [2024-12-09 23:28:24.828528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.385 ms 00:46:44.390 [2024-12-09 23:28:24.828550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.853497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.853668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:44.390 [2024-12-09 23:28:24.853740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.870 ms 00:46:44.390 [2024-12-09 23:28:24.853762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.390 [2024-12-09 23:28:24.878573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.390 [2024-12-09 23:28:24.878738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:44.391 [2024-12-09 23:28:24.878795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.590 ms 00:46:44.391 [2024-12-09 23:28:24.878816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.391 [2024-12-09 23:28:24.878902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:44.391 [2024-12-09 23:28:24.878943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.878979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879370] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.879969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880369] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 
23:28:24.880562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:46:44.391 [2024-12-09 23:28:24.880763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:44.391 [2024-12-09 23:28:24.880770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:44.392 [2024-12-09 23:28:24.880896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:44.392 [2024-12-09 23:28:24.880906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6496dd5f-945a-4404-a378-a98a70535383 00:46:44.392 [2024-12-09 23:28:24.880915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:44.392 [2024-12-09 23:28:24.880923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:44.392 [2024-12-09 23:28:24.880931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:44.392 [2024-12-09 23:28:24.880940] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:44.392 [2024-12-09 23:28:24.880958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:44.392 [2024-12-09 23:28:24.880966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:44.392 [2024-12-09 23:28:24.880973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:44.392 [2024-12-09 23:28:24.880980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:44.392 [2024-12-09 23:28:24.881001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:46:44.392 [2024-12-09 23:28:24.881009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.392 [2024-12-09 23:28:24.881019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:44.392 [2024-12-09 23:28:24.881029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:46:44.392 [2024-12-09 23:28:24.881041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.894849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.392 [2024-12-09 23:28:24.895008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:44.392 [2024-12-09 23:28:24.895066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.781 ms 00:46:44.392 [2024-12-09 23:28:24.895092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.895518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.392 [2024-12-09 23:28:24.895555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:44.392 [2024-12-09 23:28:24.895639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:46:44.392 [2024-12-09 23:28:24.895660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.932527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.392 [2024-12-09 23:28:24.932702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:44.392 [2024-12-09 23:28:24.932765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.392 [2024-12-09 23:28:24.932790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.932878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.392 [2024-12-09 23:28:24.932901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:44.392 [2024-12-09 23:28:24.932928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.392 [2024-12-09 23:28:24.932947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.933069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.392 [2024-12-09 23:28:24.933098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:44.392 [2024-12-09 23:28:24.933120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.392 [2024-12-09 23:28:24.933213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:24.933250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.392 [2024-12-09 23:28:24.933271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:44.392 [2024-12-09 23:28:24.933291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.392 [2024-12-09 23:28:24.933316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.392 [2024-12-09 23:28:25.018098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.392 [2024-12-09 23:28:25.018332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:44.392 [2024-12-09 23:28:25.018392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.392 [2024-12-09 23:28:25.018415] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.088124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.088317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:44.652 [2024-12-09 23:28:25.088386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.088408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.088483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.088506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:44.652 [2024-12-09 23:28:25.088526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.088545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.088614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.088638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:44.652 [2024-12-09 23:28:25.088659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.088743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.088880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.088981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:44.652 [2024-12-09 23:28:25.089033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.089084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.089202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.089232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:44.652 [2024-12-09 23:28:25.089244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.089253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.089305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.089314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:44.652 [2024-12-09 23:28:25.089323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.089331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.089379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:44.652 [2024-12-09 23:28:25.089390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:44.652 [2024-12-09 23:28:25.089398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:44.652 [2024-12-09 23:28:25.089406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.652 [2024-12-09 23:28:25.089550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.334 ms, result 0 00:46:45.224 00:46:45.224 00:46:45.526 23:28:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:47.468 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:46:47.468 23:28:27 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:46:47.468 [2024-12-09 23:28:27.889497] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:46:47.468 [2024-12-09 23:28:27.890359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79006 ] 00:46:47.468 [2024-12-09 23:28:28.059808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:47.729 [2024-12-09 23:28:28.178965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:47.990 [2024-12-09 23:28:28.479262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:47.990 [2024-12-09 23:28:28.479348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:48.252 [2024-12-09 23:28:28.641075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.641135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:48.252 [2024-12-09 23:28:28.641150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:48.252 [2024-12-09 23:28:28.641158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.641217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.641231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:48.252 [2024-12-09 23:28:28.641239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:46:48.252 [2024-12-09 23:28:28.641248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.641269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:48.252 [2024-12-09 23:28:28.642037] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:48.252 [2024-12-09 23:28:28.642059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.642068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:48.252 [2024-12-09 23:28:28.642078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:46:48.252 [2024-12-09 23:28:28.642086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.643900] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:48.252 [2024-12-09 23:28:28.657994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.658043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:48.252 [2024-12-09 23:28:28.658057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.096 ms 00:46:48.252 [2024-12-09 23:28:28.658066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.658156] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.658181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:48.252 [2024-12-09 23:28:28.658190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:46:48.252 [2024-12-09 23:28:28.658198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.666645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.666687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:48.252 [2024-12-09 23:28:28.666697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.364 ms 00:46:48.252 [2024-12-09 23:28:28.666712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.666794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.666804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:48.252 [2024-12-09 23:28:28.666814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:46:48.252 [2024-12-09 23:28:28.666822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.666869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.666878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:48.252 [2024-12-09 23:28:28.666887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:48.252 [2024-12-09 23:28:28.666895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.666923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:48.252 [2024-12-09 23:28:28.671329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.671362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:48.252 [2024-12-09 23:28:28.671376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.411 ms 00:46:48.252 [2024-12-09 23:28:28.671385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.671426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.252 [2024-12-09 23:28:28.671436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:48.252 [2024-12-09 23:28:28.671445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:46:48.252 [2024-12-09 23:28:28.671453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.252 [2024-12-09 23:28:28.671508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:48.252 [2024-12-09 23:28:28.671536] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:48.253 [2024-12-09 23:28:28.671572] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:48.253 [2024-12-09 23:28:28.671592] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:48.253 [2024-12-09 23:28:28.671700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:48.253 [2024-12-09 
23:28:28.671711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:48.253 [2024-12-09 23:28:28.671723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:48.253 [2024-12-09 23:28:28.671734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:48.253 [2024-12-09 23:28:28.671744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:48.253 [2024-12-09 23:28:28.671754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:48.253 [2024-12-09 23:28:28.671762] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:48.253 [2024-12-09 23:28:28.671772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:48.253 [2024-12-09 23:28:28.671780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:48.253 [2024-12-09 23:28:28.671788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.253 [2024-12-09 23:28:28.671797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:48.253 [2024-12-09 23:28:28.671805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:46:48.253 [2024-12-09 23:28:28.671812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.253 [2024-12-09 23:28:28.671896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.253 [2024-12-09 23:28:28.671905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:48.253 [2024-12-09 23:28:28.671914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:46:48.253 [2024-12-09 23:28:28.671921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.253 [2024-12-09 23:28:28.672044] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:48.253 [2024-12-09 23:28:28.672057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:48.253 [2024-12-09 23:28:28.672066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:48.253 [2024-12-09 23:28:28.672089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:48.253 [2024-12-09 23:28:28.672112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:48.253 [2024-12-09 23:28:28.672127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:48.253 [2024-12-09 23:28:28.672135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:48.253 [2024-12-09 23:28:28.672141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:48.253 [2024-12-09 23:28:28.672156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:48.253 [2024-12-09 23:28:28.672163] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:48.253 [2024-12-09 23:28:28.672170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:48.253 [2024-12-09 23:28:28.672185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:48.253 [2024-12-09 23:28:28.672210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:48.253 [2024-12-09 23:28:28.672231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:48.253 [2024-12-09 23:28:28.672251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:48.253 [2024-12-09 23:28:28.672271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:48.253 [2024-12-09 23:28:28.672292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:48.253 [2024-12-09 23:28:28.672305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:48.253 [2024-12-09 23:28:28.672312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:48.253 [2024-12-09 23:28:28.672318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:48.253 [2024-12-09 23:28:28.672325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:48.253 [2024-12-09 23:28:28.672331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:48.253 [2024-12-09 23:28:28.672339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:48.253 [2024-12-09 23:28:28.672352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:48.253 [2024-12-09 23:28:28.672360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:48.253 [2024-12-09 23:28:28.672375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:48.253 [2024-12-09 23:28:28.672384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:46:48.253 [2024-12-09 23:28:28.672391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:48.253 [2024-12-09 23:28:28.672399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:48.253 [2024-12-09 23:28:28.672406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:48.253 [2024-12-09 23:28:28.672412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:48.253 [2024-12-09 23:28:28.672421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:48.253 [2024-12-09 23:28:28.672428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:48.253 [2024-12-09 23:28:28.672435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:48.253 [2024-12-09 23:28:28.672445] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:48.253 [2024-12-09 23:28:28.672455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:48.253 [2024-12-09 23:28:28.672474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:48.253 [2024-12-09 23:28:28.672481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:48.253 [2024-12-09 23:28:28.672489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:48.253 [2024-12-09 23:28:28.672496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:48.253 [2024-12-09 23:28:28.672503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:46:48.253 [2024-12-09 23:28:28.672511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:48.253 [2024-12-09 23:28:28.672519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:48.253 [2024-12-09 23:28:28.672526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:48.253 [2024-12-09 23:28:28.672533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
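The superblock region table above lists each metadata region as a type and version plus a block offset (blk_offs) and block size (blk_sz) expressed in FTL blocks. Assuming the default 4 KiB FTL block size, these hex sizes line up with the human-readable layout dump earlier in this run; a quick shell check for the L2P region:

  # L2P region: blk_sz 0x5000 blocks, at 4 KiB per block
  echo $((0x5000))                        # -> 20480 blocks
  echo $((0x5000 * 4096 / 1024 / 1024))   # -> 80 (MiB)

which matches the "Region l2p ... blocks: 80.00 MiB" entry printed by ftl_layout.c above.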
00:46:48.253 [2024-12-09 23:28:28.672569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:48.253 [2024-12-09 23:28:28.672578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:48.253 [2024-12-09 23:28:28.672595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:48.253 [2024-12-09 23:28:28.672602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:48.253 [2024-12-09 23:28:28.672610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:48.253 [2024-12-09 23:28:28.672617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.253 [2024-12-09 23:28:28.672625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:48.253 [2024-12-09 23:28:28.672634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:46:48.253 [2024-12-09 23:28:28.672641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.253 [2024-12-09 23:28:28.705212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.253 [2024-12-09 23:28:28.705258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:48.253 [2024-12-09 23:28:28.705270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.523 ms 00:46:48.254 [2024-12-09 23:28:28.705283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.705376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.705385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:48.254 [2024-12-09 23:28:28.705394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:46:48.254 [2024-12-09 23:28:28.705402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.750366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.750415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:48.254 [2024-12-09 23:28:28.750429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.901 ms 00:46:48.254 [2024-12-09 23:28:28.750438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.750489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.750500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:48.254 [2024-12-09 23:28:28.750514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:48.254 [2024-12-09 23:28:28.750522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.751147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.751172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:48.254 [2024-12-09 
23:28:28.751183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:46:48.254 [2024-12-09 23:28:28.751192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.751351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.751362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:48.254 [2024-12-09 23:28:28.751377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:46:48.254 [2024-12-09 23:28:28.751385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.767275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.767320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:48.254 [2024-12-09 23:28:28.767332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.870 ms 00:46:48.254 [2024-12-09 23:28:28.767341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.781868] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:48.254 [2024-12-09 23:28:28.781916] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:48.254 [2024-12-09 23:28:28.781931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.781939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:48.254 [2024-12-09 23:28:28.781949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.475 ms 00:46:48.254 [2024-12-09 23:28:28.781956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.807976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.808030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:48.254 [2024-12-09 23:28:28.808043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.949 ms 00:46:48.254 [2024-12-09 23:28:28.808051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.821095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.821138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:48.254 [2024-12-09 23:28:28.821150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:46:48.254 [2024-12-09 23:28:28.821157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.833941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.833997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:48.254 [2024-12-09 23:28:28.834008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:46:48.254 [2024-12-09 23:28:28.834016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.254 [2024-12-09 23:28:28.834671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.254 [2024-12-09 23:28:28.834689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:48.254 [2024-12-09 23:28:28.834704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.540 ms 00:46:48.254 [2024-12-09 23:28:28.834712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.515 [2024-12-09 23:28:28.902483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.515 [2024-12-09 23:28:28.902551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:48.515 [2024-12-09 23:28:28.902576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.748 ms 00:46:48.515 [2024-12-09 23:28:28.902586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.515 [2024-12-09 23:28:28.914524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:48.515 [2024-12-09 23:28:28.917844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.515 [2024-12-09 23:28:28.917886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:48.515 [2024-12-09 23:28:28.917899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.189 ms 00:46:48.515 [2024-12-09 23:28:28.917908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.918024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.918038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:48.516 [2024-12-09 23:28:28.918052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:46:48.516 [2024-12-09 23:28:28.918061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.918137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.918148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:48.516 [2024-12-09 23:28:28.918158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:46:48.516 [2024-12-09 23:28:28.918167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.918191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.918201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:48.516 [2024-12-09 23:28:28.918210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:48.516 [2024-12-09 23:28:28.918219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.918256] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:48.516 [2024-12-09 23:28:28.918268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.918277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:48.516 [2024-12-09 23:28:28.918285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:46:48.516 [2024-12-09 23:28:28.918294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.944286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.944335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:48.516 [2024-12-09 23:28:28.944356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.969 ms 00:46:48.516 [2024-12-09 23:28:28.944364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:46:48.516 [2024-12-09 23:28:28.944455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.516 [2024-12-09 23:28:28.944467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:48.516 [2024-12-09 23:28:28.944477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:46:48.516 [2024-12-09 23:28:28.944485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.516 [2024-12-09 23:28:28.946202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.577 ms, result 0 00:46:49.461  [2024-12-09T23:28:31.042Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-09T23:28:31.985Z] Copying: 32/1024 [MB] (15 MBps) [2024-12-09T23:28:33.372Z] Copying: 59/1024 [MB] (27 MBps) [2024-12-09T23:28:34.315Z] Copying: 72/1024 [MB] (12 MBps) [2024-12-09T23:28:35.259Z] Copying: 102/1024 [MB] (30 MBps) [2024-12-09T23:28:36.201Z] Copying: 118/1024 [MB] (15 MBps) [2024-12-09T23:28:37.144Z] Copying: 135/1024 [MB] (17 MBps) [2024-12-09T23:28:38.087Z] Copying: 171/1024 [MB] (35 MBps) [2024-12-09T23:28:39.030Z] Copying: 208/1024 [MB] (37 MBps) [2024-12-09T23:28:39.972Z] Copying: 239/1024 [MB] (30 MBps) [2024-12-09T23:28:41.358Z] Copying: 264/1024 [MB] (25 MBps) [2024-12-09T23:28:42.302Z] Copying: 292/1024 [MB] (27 MBps) [2024-12-09T23:28:43.246Z] Copying: 328/1024 [MB] (35 MBps) [2024-12-09T23:28:44.190Z] Copying: 348/1024 [MB] (20 MBps) [2024-12-09T23:28:45.134Z] Copying: 361/1024 [MB] (13 MBps) [2024-12-09T23:28:46.078Z] Copying: 372/1024 [MB] (10 MBps) [2024-12-09T23:28:47.021Z] Copying: 387/1024 [MB] (15 MBps) [2024-12-09T23:28:47.963Z] Copying: 398/1024 [MB] (10 MBps) [2024-12-09T23:28:49.348Z] Copying: 422/1024 [MB] (24 MBps) [2024-12-09T23:28:50.289Z] Copying: 441/1024 [MB] (18 MBps) [2024-12-09T23:28:51.233Z] Copying: 462/1024 [MB] (20 MBps) [2024-12-09T23:28:52.204Z] Copying: 481/1024 [MB] (19 MBps) [2024-12-09T23:28:53.147Z] Copying: 499/1024 [MB] (17 MBps) [2024-12-09T23:28:54.091Z] Copying: 517/1024 [MB] (18 MBps) [2024-12-09T23:28:55.031Z] Copying: 536/1024 [MB] (18 MBps) [2024-12-09T23:28:55.974Z] Copying: 557/1024 [MB] (20 MBps) [2024-12-09T23:28:57.360Z] Copying: 569/1024 [MB] (12 MBps) [2024-12-09T23:28:58.303Z] Copying: 579/1024 [MB] (10 MBps) [2024-12-09T23:28:59.247Z] Copying: 592/1024 [MB] (12 MBps) [2024-12-09T23:29:00.191Z] Copying: 605/1024 [MB] (12 MBps) [2024-12-09T23:29:01.135Z] Copying: 619/1024 [MB] (14 MBps) [2024-12-09T23:29:02.078Z] Copying: 636/1024 [MB] (16 MBps) [2024-12-09T23:29:03.023Z] Copying: 652/1024 [MB] (15 MBps) [2024-12-09T23:29:03.967Z] Copying: 662/1024 [MB] (10 MBps) [2024-12-09T23:29:05.354Z] Copying: 673/1024 [MB] (10 MBps) [2024-12-09T23:29:06.297Z] Copying: 692/1024 [MB] (19 MBps) [2024-12-09T23:29:07.242Z] Copying: 707/1024 [MB] (14 MBps) [2024-12-09T23:29:08.186Z] Copying: 725/1024 [MB] (17 MBps) [2024-12-09T23:29:09.130Z] Copying: 736/1024 [MB] (10 MBps) [2024-12-09T23:29:10.074Z] Copying: 747/1024 [MB] (11 MBps) [2024-12-09T23:29:11.018Z] Copying: 760/1024 [MB] (13 MBps) [2024-12-09T23:29:11.963Z] Copying: 775/1024 [MB] (14 MBps) [2024-12-09T23:29:13.362Z] Copying: 792/1024 [MB] (17 MBps) [2024-12-09T23:29:13.968Z] Copying: 804/1024 [MB] (11 MBps) [2024-12-09T23:29:15.353Z] Copying: 815/1024 [MB] (11 MBps) [2024-12-09T23:29:16.296Z] Copying: 827/1024 [MB] (11 MBps) [2024-12-09T23:29:17.238Z] Copying: 839/1024 [MB] (12 MBps) [2024-12-09T23:29:18.181Z] Copying: 855/1024 [MB] (16 MBps) 
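The progress meter above is spdk_dd copying the test file into the FTL bdev at restore.sh@79. Assuming dd-style units of one 4 KiB FTL block (consistent with the 1024 MB total shown by the meter), --seek=131072 places the data 131072 x 4 KiB = 512 MiB into the device, and the --count=262144 used by the read-back later in this run works out to 262144 x 4 KiB = 1024 MiB. A minimal sketch of the write/read/verify round-trip this test performs, using the flags visible at restore.sh@79 and @80 (paths abbreviated; ftl.json as in this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  # write the test file into ftl0 at a 512 MiB offset
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 \
      --json=$SPDK/test/ftl/config/ftl.json --seek=131072
  # read the same range back into the file after the FTL restart
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/testfile \
      --json=$SPDK/test/ftl/config/ftl.json --skip=131072 --count=262144
  # compare against the pre-recorded digest, as at restore.sh@76
  md5sum -c $SPDK/test/ftl/testfile.md5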
[2024-12-09T23:29:19.124Z] Copying: 870/1024 [MB] (14 MBps) [2024-12-09T23:29:20.068Z] Copying: 887/1024 [MB] (16 MBps) [2024-12-09T23:29:21.009Z] Copying: 905/1024 [MB] (18 MBps) [2024-12-09T23:29:22.395Z] Copying: 920/1024 [MB] (14 MBps) [2024-12-09T23:29:22.967Z] Copying: 936/1024 [MB] (15 MBps) [2024-12-09T23:29:24.354Z] Copying: 949/1024 [MB] (13 MBps) [2024-12-09T23:29:25.297Z] Copying: 965/1024 [MB] (16 MBps) [2024-12-09T23:29:26.240Z] Copying: 976/1024 [MB] (10 MBps) [2024-12-09T23:29:27.184Z] Copying: 988/1024 [MB] (12 MBps) [2024-12-09T23:29:28.129Z] Copying: 1001/1024 [MB] (12 MBps) [2024-12-09T23:29:29.071Z] Copying: 1035316/1048576 [kB] (10144 kBps) [2024-12-09T23:29:30.015Z] Copying: 1021/1024 [MB] (10 MBps) [2024-12-09T23:29:30.015Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-09 23:29:29.936081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:29.936158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:49.379 [2024-12-09 23:29:29.936187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:49.379 [2024-12-09 23:29:29.936197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.379 [2024-12-09 23:29:29.938860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:49.379 [2024-12-09 23:29:29.943092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:29.943146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:49.379 [2024-12-09 23:29:29.943160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.173 ms 00:47:49.379 [2024-12-09 23:29:29.943168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.379 [2024-12-09 23:29:29.954936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:29.955003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:49.379 [2024-12-09 23:29:29.955017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.123 ms 00:47:49.379 [2024-12-09 23:29:29.955034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.379 [2024-12-09 23:29:29.977586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:29.977641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:49.379 [2024-12-09 23:29:29.977654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.531 ms 00:47:49.379 [2024-12-09 23:29:29.977662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.379 [2024-12-09 23:29:29.983802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:29.983851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:49.379 [2024-12-09 23:29:29.983863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:47:49.379 [2024-12-09 23:29:29.983879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.379 [2024-12-09 23:29:30.010976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.379 [2024-12-09 23:29:30.011040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:49.379 [2024-12-09 23:29:30.011054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.050 ms 00:47:49.379 [2024-12-09 
23:29:30.011062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.641 [2024-12-09 23:29:30.027509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.641 [2024-12-09 23:29:30.027566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:49.641 [2024-12-09 23:29:30.027580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.394 ms 00:47:49.641 [2024-12-09 23:29:30.027588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.641 [2024-12-09 23:29:30.259931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.641 [2024-12-09 23:29:30.260005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:49.641 [2024-12-09 23:29:30.260021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 232.288 ms 00:47:49.641 [2024-12-09 23:29:30.260030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.903 [2024-12-09 23:29:30.286833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.903 [2024-12-09 23:29:30.286881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:49.903 [2024-12-09 23:29:30.286894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.786 ms 00:47:49.903 [2024-12-09 23:29:30.286903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.903 [2024-12-09 23:29:30.312042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.903 [2024-12-09 23:29:30.312092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:49.903 [2024-12-09 23:29:30.312104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.090 ms 00:47:49.903 [2024-12-09 23:29:30.312112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.903 [2024-12-09 23:29:30.336773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.903 [2024-12-09 23:29:30.336831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:49.903 [2024-12-09 23:29:30.336844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.613 ms 00:47:49.903 [2024-12-09 23:29:30.336851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.903 [2024-12-09 23:29:30.361843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.903 [2024-12-09 23:29:30.361896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:49.903 [2024-12-09 23:29:30.361909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.916 ms 00:47:49.903 [2024-12-09 23:29:30.361918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.903 [2024-12-09 23:29:30.361964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:49.903 [2024-12-09 23:29:30.361980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92928 / 261120 wr_cnt: 1 state: open 00:47:49.903 [2024-12-09 23:29:30.362012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362038] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:49.903 [2024-12-09 23:29:30.362149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 
23:29:30.362231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
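Each ftl_dev_dump_bands entry reads: valid blocks / band capacity (261120 blocks per band here), the band's write count, and its state. Only Band 1 carries data in this dump (92928 valid blocks, state open); the other bands listed in this run are free. A hypothetical one-liner to condense such a dump from a saved console log (console.log is an assumed filename):

  grep -Eo 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' console.log \
      | awk '{states[$NF]++} END {for (s in states) print s, states[s]}'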
00:47:49.904 [2024-12-09 23:29:30.362603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:49.904 [2024-12-09 23:29:30.362970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:49.904 [2024-12-09 23:29:30.362979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6496dd5f-945a-4404-a378-a98a70535383 00:47:49.904 [2024-12-09 23:29:30.362998] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92928 00:47:49.904 [2024-12-09 23:29:30.363006] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93888 00:47:49.905 [2024-12-09 
23:29:30.363013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92928 00:47:49.905 [2024-12-09 23:29:30.363023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0103 00:47:49.905 [2024-12-09 23:29:30.363043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:49.905 [2024-12-09 23:29:30.363052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:49.905 [2024-12-09 23:29:30.363060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:49.905 [2024-12-09 23:29:30.363067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:49.905 [2024-12-09 23:29:30.363074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:49.905 [2024-12-09 23:29:30.363082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.905 [2024-12-09 23:29:30.363090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:49.905 [2024-12-09 23:29:30.363099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:47:49.905 [2024-12-09 23:29:30.363107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.376571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.905 [2024-12-09 23:29:30.376617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:49.905 [2024-12-09 23:29:30.376635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:47:49.905 [2024-12-09 23:29:30.376643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.377069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.905 [2024-12-09 23:29:30.377094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:49.905 [2024-12-09 23:29:30.377106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:47:49.905 [2024-12-09 23:29:30.377113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.413480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.905 [2024-12-09 23:29:30.413530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:49.905 [2024-12-09 23:29:30.413542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.905 [2024-12-09 23:29:30.413552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.413628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.905 [2024-12-09 23:29:30.413638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:49.905 [2024-12-09 23:29:30.413648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.905 [2024-12-09 23:29:30.413658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.413752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.905 [2024-12-09 23:29:30.413770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:49.905 [2024-12-09 23:29:30.413780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.905 [2024-12-09 23:29:30.413789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.413807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:47:49.905 [2024-12-09 23:29:30.413817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:49.905 [2024-12-09 23:29:30.413826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.905 [2024-12-09 23:29:30.413835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.905 [2024-12-09 23:29:30.497203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.905 [2024-12-09 23:29:30.497267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:49.905 [2024-12-09 23:29:30.497280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.905 [2024-12-09 23:29:30.497289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:50.166 [2024-12-09 23:29:30.566167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:50.166 [2024-12-09 23:29:30.566273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:50.166 [2024-12-09 23:29:30.566347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:50.166 [2024-12-09 23:29:30.566473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:50.166 [2024-12-09 23:29:30.566536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:50.166 [2024-12-09 23:29:30.566605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 
[2024-12-09 23:29:30.566665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:50.166 [2024-12-09 23:29:30.566675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:50.166 [2024-12-09 23:29:30.566684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:50.166 [2024-12-09 23:29:30.566692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:50.166 [2024-12-09 23:29:30.566828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 631.328 ms, result 0 00:47:51.554 00:47:51.554 00:47:51.554 23:29:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:47:51.554 [2024-12-09 23:29:32.091911] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:47:51.554 [2024-12-09 23:29:32.092080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79657 ] 00:47:51.815 [2024-12-09 23:29:32.257125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:51.815 [2024-12-09 23:29:32.380000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.076 [2024-12-09 23:29:32.676663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:52.076 [2024-12-09 23:29:32.676753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:52.339 [2024-12-09 23:29:32.837931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.838020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:52.339 [2024-12-09 23:29:32.838037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:52.339 [2024-12-09 23:29:32.838046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.838104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.838118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:52.339 [2024-12-09 23:29:32.838127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:47:52.339 [2024-12-09 23:29:32.838135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.838155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:52.339 [2024-12-09 23:29:32.838893] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:52.339 [2024-12-09 23:29:32.838923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.838931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:52.339 [2024-12-09 23:29:32.838941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:47:52.339 [2024-12-09 23:29:32.838950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.840669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:47:52.339 [2024-12-09 23:29:32.854852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.854903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:52.339 [2024-12-09 23:29:32.854916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.185 ms 00:47:52.339 [2024-12-09 23:29:32.854925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.855021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.855033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:52.339 [2024-12-09 23:29:32.855042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:47:52.339 [2024-12-09 23:29:32.855051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.863177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.863224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:52.339 [2024-12-09 23:29:32.863236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.048 ms 00:47:52.339 [2024-12-09 23:29:32.863250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.863330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.863340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:52.339 [2024-12-09 23:29:32.863348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:47:52.339 [2024-12-09 23:29:32.863357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.863399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.863409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:52.339 [2024-12-09 23:29:32.863417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:47:52.339 [2024-12-09 23:29:32.863425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.863450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:52.339 [2024-12-09 23:29:32.867549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.867589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:52.339 [2024-12-09 23:29:32.867604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.104 ms 00:47:52.339 [2024-12-09 23:29:32.867611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.867649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.867658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:52.339 [2024-12-09 23:29:32.867667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:47:52.339 [2024-12-09 23:29:32.867675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.867727] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:52.339 [2024-12-09 23:29:32.867752] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:52.339 [2024-12-09 23:29:32.867789] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:52.339 [2024-12-09 23:29:32.867808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:52.339 [2024-12-09 23:29:32.867915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:52.339 [2024-12-09 23:29:32.867928] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:52.339 [2024-12-09 23:29:32.867940] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:52.339 [2024-12-09 23:29:32.867951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:52.339 [2024-12-09 23:29:32.867961] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:52.339 [2024-12-09 23:29:32.867969] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:52.339 [2024-12-09 23:29:32.867977] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:52.339 [2024-12-09 23:29:32.868004] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:52.339 [2024-12-09 23:29:32.868012] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:52.339 [2024-12-09 23:29:32.868020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.868029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:52.339 [2024-12-09 23:29:32.868037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:47:52.339 [2024-12-09 23:29:32.868045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.868128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.339 [2024-12-09 23:29:32.868138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:52.339 [2024-12-09 23:29:32.868146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:47:52.339 [2024-12-09 23:29:32.868154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.339 [2024-12-09 23:29:32.868261] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:52.339 [2024-12-09 23:29:32.868281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:52.339 [2024-12-09 23:29:32.868290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:52.339 [2024-12-09 23:29:32.868298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:52.339 [2024-12-09 23:29:32.868314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:52.339 [2024-12-09 23:29:32.868329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:52.339 [2024-12-09 23:29:32.868337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868343] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:52.339 [2024-12-09 23:29:32.868350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:52.339 [2024-12-09 23:29:32.868357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:52.339 [2024-12-09 23:29:32.868365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:52.339 [2024-12-09 23:29:32.868379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:52.339 [2024-12-09 23:29:32.868386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:52.339 [2024-12-09 23:29:32.868395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:52.339 [2024-12-09 23:29:32.868409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:52.339 [2024-12-09 23:29:32.868416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:52.339 [2024-12-09 23:29:32.868430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.339 [2024-12-09 23:29:32.868443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:52.339 [2024-12-09 23:29:32.868449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:52.339 [2024-12-09 23:29:32.868455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.339 [2024-12-09 23:29:32.868463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:52.339 [2024-12-09 23:29:32.868469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.340 [2024-12-09 23:29:32.868483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:52.340 [2024-12-09 23:29:32.868490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.340 [2024-12-09 23:29:32.868503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:52.340 [2024-12-09 23:29:32.868510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:52.340 [2024-12-09 23:29:32.868523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:52.340 [2024-12-09 23:29:32.868530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:52.340 [2024-12-09 23:29:32.868536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:52.340 [2024-12-09 23:29:32.868543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:52.340 [2024-12-09 23:29:32.868550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:52.340 [2024-12-09 23:29:32.868556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:52.340 [2024-12-09 
23:29:32.868569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:52.340 [2024-12-09 23:29:32.868576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868583] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:52.340 [2024-12-09 23:29:32.868591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:52.340 [2024-12-09 23:29:32.868600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:52.340 [2024-12-09 23:29:32.868607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.340 [2024-12-09 23:29:32.868616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:52.340 [2024-12-09 23:29:32.868624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:52.340 [2024-12-09 23:29:32.868631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:52.340 [2024-12-09 23:29:32.868638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:52.340 [2024-12-09 23:29:32.868644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:52.340 [2024-12-09 23:29:32.868650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:52.340 [2024-12-09 23:29:32.868658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:52.340 [2024-12-09 23:29:32.868668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:52.340 [2024-12-09 23:29:32.868687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:52.340 [2024-12-09 23:29:32.868695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:52.340 [2024-12-09 23:29:32.868702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:52.340 [2024-12-09 23:29:32.868709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:52.340 [2024-12-09 23:29:32.868716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:52.340 [2024-12-09 23:29:32.868724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:52.340 [2024-12-09 23:29:32.868731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:52.340 [2024-12-09 23:29:32.868738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:52.340 [2024-12-09 23:29:32.868745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:52.340 [2024-12-09 23:29:32.868783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:52.340 [2024-12-09 23:29:32.868791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:52.340 [2024-12-09 23:29:32.868807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:52.340 [2024-12-09 23:29:32.868814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:52.340 [2024-12-09 23:29:32.868822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:52.340 [2024-12-09 23:29:32.868830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.868838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:52.340 [2024-12-09 23:29:32.868847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:47:52.340 [2024-12-09 23:29:32.868855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.900698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.900748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:52.340 [2024-12-09 23:29:32.900765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.793 ms 00:47:52.340 [2024-12-09 23:29:32.900776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.900867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.900876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:52.340 [2024-12-09 23:29:32.900886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:47:52.340 [2024-12-09 23:29:32.900898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.946131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.946186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:52.340 [2024-12-09 23:29:32.946200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.172 ms 00:47:52.340 [2024-12-09 23:29:32.946209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.946259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.946274] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:52.340 [2024-12-09 23:29:32.946283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:52.340 [2024-12-09 23:29:32.946291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.946913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.946951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:52.340 [2024-12-09 23:29:32.946962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:47:52.340 [2024-12-09 23:29:32.946971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.947156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.947174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:52.340 [2024-12-09 23:29:32.947183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:47:52.340 [2024-12-09 23:29:32.947191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.340 [2024-12-09 23:29:32.962815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.340 [2024-12-09 23:29:32.962863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:52.340 [2024-12-09 23:29:32.962874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.604 ms 00:47:52.340 [2024-12-09 23:29:32.962882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:32.977421] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:52.602 [2024-12-09 23:29:32.977475] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:52.602 [2024-12-09 23:29:32.977489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:32.977498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:52.602 [2024-12-09 23:29:32.977508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:47:52.602 [2024-12-09 23:29:32.977515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.003277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.003328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:52.602 [2024-12-09 23:29:33.003340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.707 ms 00:47:52.602 [2024-12-09 23:29:33.003349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.016381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.016429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:52.602 [2024-12-09 23:29:33.016441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.977 ms 00:47:52.602 [2024-12-09 23:29:33.016448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.029266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.029313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:47:52.602 [2024-12-09 23:29:33.029325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.770 ms 00:47:52.602 [2024-12-09 23:29:33.029333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.030013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.030053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:52.602 [2024-12-09 23:29:33.030063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:47:52.602 [2024-12-09 23:29:33.030071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.096016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.096084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:52.602 [2024-12-09 23:29:33.096100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.922 ms 00:47:52.602 [2024-12-09 23:29:33.096109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.107650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:52.602 [2024-12-09 23:29:33.110860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.110905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:52.602 [2024-12-09 23:29:33.110917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:47:52.602 [2024-12-09 23:29:33.110925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.111028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.111041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:52.602 [2024-12-09 23:29:33.111055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:47:52.602 [2024-12-09 23:29:33.111064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.112686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.112736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:52.602 [2024-12-09 23:29:33.112746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:47:52.602 [2024-12-09 23:29:33.112755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.112784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.112794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:52.602 [2024-12-09 23:29:33.112803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:52.602 [2024-12-09 23:29:33.112817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.112856] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:52.602 [2024-12-09 23:29:33.112867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.112876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:52.602 [2024-12-09 23:29:33.112886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 
00:47:52.602 [2024-12-09 23:29:33.112893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.138764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.138824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:52.602 [2024-12-09 23:29:33.138844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.850 ms 00:47:52.602 [2024-12-09 23:29:33.138852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.138936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.602 [2024-12-09 23:29:33.138946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:52.602 [2024-12-09 23:29:33.138956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:47:52.602 [2024-12-09 23:29:33.138964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.602 [2024-12-09 23:29:33.142802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.598 ms, result 0 00:47:53.989  [2024-12-09T23:29:35.567Z] Copying: 7724/1048576 [kB] (7724 kBps) [2024-12-09T23:29:36.514Z] Copying: 22/1024 [MB] (15 MBps) [2024-12-09T23:29:37.520Z] Copying: 42/1024 [MB] (19 MBps) [2024-12-09T23:29:38.464Z] Copying: 60/1024 [MB] (17 MBps) [2024-12-09T23:29:39.408Z] Copying: 77/1024 [MB] (16 MBps) [2024-12-09T23:29:40.351Z] Copying: 94/1024 [MB] (17 MBps) [2024-12-09T23:29:41.736Z] Copying: 106/1024 [MB] (11 MBps) [2024-12-09T23:29:42.680Z] Copying: 124/1024 [MB] (17 MBps) [2024-12-09T23:29:43.623Z] Copying: 150/1024 [MB] (26 MBps) [2024-12-09T23:29:44.567Z] Copying: 171/1024 [MB] (20 MBps) [2024-12-09T23:29:45.510Z] Copying: 192/1024 [MB] (21 MBps) [2024-12-09T23:29:46.454Z] Copying: 212/1024 [MB] (19 MBps) [2024-12-09T23:29:47.398Z] Copying: 231/1024 [MB] (19 MBps) [2024-12-09T23:29:48.350Z] Copying: 254/1024 [MB] (22 MBps) [2024-12-09T23:29:49.737Z] Copying: 274/1024 [MB] (20 MBps) [2024-12-09T23:29:50.681Z] Copying: 296/1024 [MB] (22 MBps) [2024-12-09T23:29:51.623Z] Copying: 317/1024 [MB] (20 MBps) [2024-12-09T23:29:52.567Z] Copying: 328/1024 [MB] (11 MBps) [2024-12-09T23:29:53.510Z] Copying: 341/1024 [MB] (12 MBps) [2024-12-09T23:29:54.453Z] Copying: 351/1024 [MB] (10 MBps) [2024-12-09T23:29:55.394Z] Copying: 362/1024 [MB] (10 MBps) [2024-12-09T23:29:56.336Z] Copying: 373/1024 [MB] (11 MBps) [2024-12-09T23:29:57.720Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-09T23:29:58.662Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-09T23:29:59.609Z] Copying: 405/1024 [MB] (10 MBps) [2024-12-09T23:30:00.591Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-09T23:30:01.534Z] Copying: 430/1024 [MB] (13 MBps) [2024-12-09T23:30:02.477Z] Copying: 440/1024 [MB] (10 MBps) [2024-12-09T23:30:03.421Z] Copying: 457/1024 [MB] (16 MBps) [2024-12-09T23:30:04.365Z] Copying: 474/1024 [MB] (17 MBps) [2024-12-09T23:30:05.754Z] Copying: 488/1024 [MB] (13 MBps) [2024-12-09T23:30:06.698Z] Copying: 505/1024 [MB] (17 MBps) [2024-12-09T23:30:07.641Z] Copying: 521/1024 [MB] (15 MBps) [2024-12-09T23:30:08.586Z] Copying: 544/1024 [MB] (22 MBps) [2024-12-09T23:30:09.529Z] Copying: 561/1024 [MB] (17 MBps) [2024-12-09T23:30:10.474Z] Copying: 572/1024 [MB] (10 MBps) [2024-12-09T23:30:11.419Z] Copying: 582/1024 [MB] (10 MBps) [2024-12-09T23:30:12.363Z] Copying: 595/1024 [MB] (12 MBps) [2024-12-09T23:30:13.752Z] Copying: 610/1024 [MB] (14 MBps) 
[2024-12-09T23:30:14.696Z] Copying: 625/1024 [MB] (15 MBps) [2024-12-09T23:30:15.636Z] Copying: 640/1024 [MB] (15 MBps) [2024-12-09T23:30:16.577Z] Copying: 652/1024 [MB] (11 MBps) [2024-12-09T23:30:17.520Z] Copying: 667/1024 [MB] (15 MBps) [2024-12-09T23:30:18.462Z] Copying: 677/1024 [MB] (10 MBps) [2024-12-09T23:30:19.406Z] Copying: 691/1024 [MB] (13 MBps) [2024-12-09T23:30:20.351Z] Copying: 702/1024 [MB] (10 MBps) [2024-12-09T23:30:21.738Z] Copying: 723/1024 [MB] (20 MBps) [2024-12-09T23:30:22.682Z] Copying: 737/1024 [MB] (13 MBps) [2024-12-09T23:30:23.488Z] Copying: 752/1024 [MB] (15 MBps) [2024-12-09T23:30:24.432Z] Copying: 770/1024 [MB] (18 MBps) [2024-12-09T23:30:25.375Z] Copying: 782/1024 [MB] (12 MBps) [2024-12-09T23:30:26.762Z] Copying: 795/1024 [MB] (13 MBps) [2024-12-09T23:30:27.335Z] Copying: 816/1024 [MB] (20 MBps) [2024-12-09T23:30:28.721Z] Copying: 829/1024 [MB] (12 MBps) [2024-12-09T23:30:29.665Z] Copying: 843/1024 [MB] (14 MBps) [2024-12-09T23:30:30.609Z] Copying: 854/1024 [MB] (10 MBps) [2024-12-09T23:30:31.551Z] Copying: 865/1024 [MB] (10 MBps) [2024-12-09T23:30:32.494Z] Copying: 875/1024 [MB] (10 MBps) [2024-12-09T23:30:33.437Z] Copying: 885/1024 [MB] (10 MBps) [2024-12-09T23:30:34.381Z] Copying: 896/1024 [MB] (10 MBps) [2024-12-09T23:30:35.768Z] Copying: 906/1024 [MB] (10 MBps) [2024-12-09T23:30:36.338Z] Copying: 916/1024 [MB] (10 MBps) [2024-12-09T23:30:37.724Z] Copying: 927/1024 [MB] (10 MBps) [2024-12-09T23:30:38.669Z] Copying: 938/1024 [MB] (10 MBps) [2024-12-09T23:30:39.613Z] Copying: 950/1024 [MB] (11 MBps) [2024-12-09T23:30:40.557Z] Copying: 960/1024 [MB] (10 MBps) [2024-12-09T23:30:41.500Z] Copying: 970/1024 [MB] (10 MBps) [2024-12-09T23:30:42.443Z] Copying: 981/1024 [MB] (10 MBps) [2024-12-09T23:30:43.015Z] Copying: 1005/1024 [MB] (24 MBps) [2024-12-09T23:30:43.015Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-09 23:30:42.862817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.379 [2024-12-09 23:30:42.862889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:02.379 [2024-12-09 23:30:42.862920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:02.379 [2024-12-09 23:30:42.862931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.379 [2024-12-09 23:30:42.862957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:02.379 [2024-12-09 23:30:42.866164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.379 [2024-12-09 23:30:42.866206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:02.380 [2024-12-09 23:30:42.866218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:49:02.380 [2024-12-09 23:30:42.866227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.380 [2024-12-09 23:30:42.866475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.380 [2024-12-09 23:30:42.866488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:02.380 [2024-12-09 23:30:42.866503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:49:02.380 [2024-12-09 23:30:42.866512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.380 [2024-12-09 23:30:42.874074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.380 [2024-12-09 23:30:42.874122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:49:02.380 [2024-12-09 23:30:42.874133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.545 ms 00:49:02.380 [2024-12-09 23:30:42.874142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.380 [2024-12-09 23:30:42.881037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.380 [2024-12-09 23:30:42.881080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:02.380 [2024-12-09 23:30:42.881092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:49:02.380 [2024-12-09 23:30:42.881108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.380 [2024-12-09 23:30:42.907615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.380 [2024-12-09 23:30:42.907670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:02.380 [2024-12-09 23:30:42.907683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.461 ms 00:49:02.380 [2024-12-09 23:30:42.907691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.380 [2024-12-09 23:30:42.924661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.380 [2024-12-09 23:30:42.924730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:02.380 [2024-12-09 23:30:42.924744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.918 ms 00:49:02.380 [2024-12-09 23:30:42.924753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.296351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.952 [2024-12-09 23:30:43.296412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:02.952 [2024-12-09 23:30:43.296426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 371.538 ms 00:49:02.952 [2024-12-09 23:30:43.296435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.323819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.952 [2024-12-09 23:30:43.323871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:02.952 [2024-12-09 23:30:43.323884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.359 ms 00:49:02.952 [2024-12-09 23:30:43.323892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.350768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.952 [2024-12-09 23:30:43.350821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:02.952 [2024-12-09 23:30:43.350834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.823 ms 00:49:02.952 [2024-12-09 23:30:43.350842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.376593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.952 [2024-12-09 23:30:43.376646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:02.952 [2024-12-09 23:30:43.376658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.701 ms 00:49:02.952 [2024-12-09 23:30:43.376665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.402417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.952 
[2024-12-09 23:30:43.402468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:02.952 [2024-12-09 23:30:43.402481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.659 ms 00:49:02.952 [2024-12-09 23:30:43.402488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.952 [2024-12-09 23:30:43.402536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:02.952 [2024-12-09 23:30:43.402553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:49:02.952 [2024-12-09 23:30:43.402565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:02.952 [2024-12-09 23:30:43.402862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402944] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.402976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 
23:30:43.403381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:02.953 [2024-12-09 23:30:43.403422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:02.953 [2024-12-09 23:30:43.403432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6496dd5f-945a-4404-a378-a98a70535383 00:49:02.953 [2024-12-09 23:30:43.403441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:49:02.953 [2024-12-09 23:30:43.403449] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 39104 00:49:02.953 [2024-12-09 23:30:43.403457] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 38144 00:49:02.953 [2024-12-09 23:30:43.403469] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0252 00:49:02.953 [2024-12-09 23:30:43.403477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:02.953 [2024-12-09 23:30:43.403493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:02.953 [2024-12-09 23:30:43.403500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:02.953 [2024-12-09 23:30:43.403507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:02.953 [2024-12-09 23:30:43.403514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:02.953 [2024-12-09 23:30:43.403522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.953 [2024-12-09 23:30:43.403530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:02.953 [2024-12-09 23:30:43.403540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:49:02.953 [2024-12-09 23:30:43.403548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.953 [2024-12-09 23:30:43.417047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.953 [2024-12-09 23:30:43.417102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:02.953 [2024-12-09 23:30:43.417113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.463 ms 00:49:02.953 [2024-12-09 23:30:43.417121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.953 [2024-12-09 23:30:43.417517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:02.953 [2024-12-09 23:30:43.417537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:02.953 [2024-12-09 23:30:43.417547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:49:02.953 [2024-12-09 23:30:43.417555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.953 [2024-12-09 23:30:43.454571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:02.953 [2024-12-09 23:30:43.454624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:02.953 [2024-12-09 23:30:43.454636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:02.953 [2024-12-09 
23:30:43.454646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.953 [2024-12-09 23:30:43.454721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:02.953 [2024-12-09 23:30:43.454731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:02.954 [2024-12-09 23:30:43.454740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:02.954 [2024-12-09 23:30:43.454751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.954 [2024-12-09 23:30:43.454820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:02.954 [2024-12-09 23:30:43.454838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:02.954 [2024-12-09 23:30:43.454847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:02.954 [2024-12-09 23:30:43.454856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.954 [2024-12-09 23:30:43.454874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:02.954 [2024-12-09 23:30:43.454883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:02.954 [2024-12-09 23:30:43.454893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:02.954 [2024-12-09 23:30:43.454901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:02.954 [2024-12-09 23:30:43.540935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:02.954 [2024-12-09 23:30:43.541019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:02.954 [2024-12-09 23:30:43.541034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:02.954 [2024-12-09 23:30:43.541044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.214 [2024-12-09 23:30:43.611415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.214 [2024-12-09 23:30:43.611476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:03.214 [2024-12-09 23:30:43.611490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.214 [2024-12-09 23:30:43.611499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.214 [2024-12-09 23:30:43.611579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.214 [2024-12-09 23:30:43.611590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:03.214 [2024-12-09 23:30:43.611606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.214 [2024-12-09 23:30:43.611615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.214 [2024-12-09 23:30:43.611655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.215 [2024-12-09 23:30:43.611665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:03.215 [2024-12-09 23:30:43.611674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.215 [2024-12-09 23:30:43.611682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.215 [2024-12-09 23:30:43.611785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.215 [2024-12-09 23:30:43.611796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:03.215 [2024-12-09 23:30:43.611805] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.215 [2024-12-09 23:30:43.611816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.215 [2024-12-09 23:30:43.611848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.215 [2024-12-09 23:30:43.611858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:03.215 [2024-12-09 23:30:43.611866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.215 [2024-12-09 23:30:43.611875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.215 [2024-12-09 23:30:43.611917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.215 [2024-12-09 23:30:43.611927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:03.215 [2024-12-09 23:30:43.611936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.215 [2024-12-09 23:30:43.611947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.215 [2024-12-09 23:30:43.612017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:03.215 [2024-12-09 23:30:43.612028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:03.215 [2024-12-09 23:30:43.612038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:03.215 [2024-12-09 23:30:43.612046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:03.215 [2024-12-09 23:30:43.612192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 749.328 ms, result 0 00:49:03.787 00:49:03.787 00:49:03.787 23:30:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:49:06.435 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77362 00:49:06.435 23:30:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77362 ']' 00:49:06.435 23:30:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77362 00:49:06.435 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77362) - No such process 00:49:06.435 Process with pid 77362 is not found 00:49:06.435 23:30:46 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77362 is not found' 00:49:06.435 23:30:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:49:06.435 Remove shared memory files 00:49:06.436 23:30:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:49:06.436 23:30:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:49:06.436 23:30:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:49:06.436 23:30:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:49:06.436 23:30:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:49:06.436 23:30:46 ftl.ftl_restore 
-- ftl/common.sh@209 -- # rm -f rm -f 00:49:06.436 00:49:06.436 real 4m56.186s 00:49:06.436 user 4m42.125s 00:49:06.436 sys 0m13.593s 00:49:06.436 23:30:46 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:06.436 23:30:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:49:06.436 ************************************ 00:49:06.436 END TEST ftl_restore 00:49:06.436 ************************************ 00:49:06.436 23:30:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:49:06.436 23:30:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:49:06.436 23:30:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:06.436 23:30:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:49:06.436 ************************************ 00:49:06.436 START TEST ftl_dirty_shutdown 00:49:06.436 ************************************ 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:49:06.436 * Looking for test storage... 00:49:06.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.436 --rc genhtml_branch_coverage=1 00:49:06.436 --rc genhtml_function_coverage=1 00:49:06.436 --rc genhtml_legend=1 00:49:06.436 --rc geninfo_all_blocks=1 00:49:06.436 --rc geninfo_unexecuted_blocks=1 00:49:06.436 00:49:06.436 ' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.436 --rc genhtml_branch_coverage=1 00:49:06.436 --rc genhtml_function_coverage=1 00:49:06.436 --rc genhtml_legend=1 00:49:06.436 --rc geninfo_all_blocks=1 00:49:06.436 --rc geninfo_unexecuted_blocks=1 00:49:06.436 00:49:06.436 ' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.436 --rc genhtml_branch_coverage=1 00:49:06.436 --rc genhtml_function_coverage=1 00:49:06.436 --rc genhtml_legend=1 00:49:06.436 --rc geninfo_all_blocks=1 00:49:06.436 --rc geninfo_unexecuted_blocks=1 00:49:06.436 00:49:06.436 ' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.436 --rc genhtml_branch_coverage=1 00:49:06.436 --rc genhtml_function_coverage=1 00:49:06.436 --rc genhtml_legend=1 00:49:06.436 --rc geninfo_all_blocks=1 00:49:06.436 --rc geninfo_unexecuted_blocks=1 00:49:06.436 00:49:06.436 ' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:49:06.436 23:30:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80481 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80481 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80481 ']' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:49:06.436 23:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:49:06.437 [2024-12-09 23:30:46.910531] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
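Note: the trace that follows brings the FTL stack up over JSON-RPC. A condensed sketch of the sequence, assembled from the rpc.py calls traced in this section (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; $lvs_uuid and $lvol_uuid stand in for the per-run UUIDs printed below):

  # attach the base NVMe (4096 B x 1310720 blocks = 5120 MiB) and the NV-cache NVMe
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # thin-provisioned lvol (-t): advertises 103424 MiB (4096 B x 26476544 blocks) on the 5120 MiB base
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u $lvs_uuid
  # carve a 5171 MiB slice of the cache device to act as the FTL write-buffer cache
  rpc.py bdev_split_create nvc0n1 -s 5171 1
  # assemble the FTL bdev, capping the DRAM-resident L2P table at 10 MiB
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d $lvol_uuid --l2p_dram_limit 10 -c nvc0n1p0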
00:49:06.437 [2024-12-09 23:30:46.910651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80481 ] 00:49:06.697 [2024-12-09 23:30:47.071127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:06.697 [2024-12-09 23:30:47.179244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:49:07.268 23:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:49:07.839 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:07.839 { 00:49:07.839 "name": "nvme0n1", 00:49:07.839 "aliases": [ 00:49:07.839 "238073da-4d1e-4d56-aa9b-7c983528d45b" 00:49:07.839 ], 00:49:07.839 "product_name": "NVMe disk", 00:49:07.839 "block_size": 4096, 00:49:07.839 "num_blocks": 1310720, 00:49:07.839 "uuid": "238073da-4d1e-4d56-aa9b-7c983528d45b", 00:49:07.839 "numa_id": -1, 00:49:07.839 "assigned_rate_limits": { 00:49:07.839 "rw_ios_per_sec": 0, 00:49:07.839 "rw_mbytes_per_sec": 0, 00:49:07.839 "r_mbytes_per_sec": 0, 00:49:07.839 "w_mbytes_per_sec": 0 00:49:07.839 }, 00:49:07.839 "claimed": true, 00:49:07.839 "claim_type": "read_many_write_one", 00:49:07.839 "zoned": false, 00:49:07.839 "supported_io_types": { 00:49:07.839 "read": true, 00:49:07.839 "write": true, 00:49:07.839 "unmap": true, 00:49:07.839 "flush": true, 00:49:07.839 "reset": true, 00:49:07.839 "nvme_admin": true, 00:49:07.839 "nvme_io": true, 00:49:07.839 "nvme_io_md": false, 00:49:07.839 "write_zeroes": true, 00:49:07.839 "zcopy": false, 00:49:07.840 "get_zone_info": false, 00:49:07.840 "zone_management": false, 00:49:07.840 "zone_append": false, 00:49:07.840 "compare": true, 00:49:07.840 "compare_and_write": false, 00:49:07.840 "abort": true, 00:49:07.840 "seek_hole": false, 00:49:07.840 "seek_data": false, 00:49:07.840 
"copy": true, 00:49:07.840 "nvme_iov_md": false 00:49:07.840 }, 00:49:07.840 "driver_specific": { 00:49:07.840 "nvme": [ 00:49:07.840 { 00:49:07.840 "pci_address": "0000:00:11.0", 00:49:07.840 "trid": { 00:49:07.840 "trtype": "PCIe", 00:49:07.840 "traddr": "0000:00:11.0" 00:49:07.840 }, 00:49:07.840 "ctrlr_data": { 00:49:07.840 "cntlid": 0, 00:49:07.840 "vendor_id": "0x1b36", 00:49:07.840 "model_number": "QEMU NVMe Ctrl", 00:49:07.840 "serial_number": "12341", 00:49:07.840 "firmware_revision": "8.0.0", 00:49:07.840 "subnqn": "nqn.2019-08.org.qemu:12341", 00:49:07.840 "oacs": { 00:49:07.840 "security": 0, 00:49:07.840 "format": 1, 00:49:07.840 "firmware": 0, 00:49:07.840 "ns_manage": 1 00:49:07.840 }, 00:49:07.840 "multi_ctrlr": false, 00:49:07.840 "ana_reporting": false 00:49:07.840 }, 00:49:07.840 "vs": { 00:49:07.840 "nvme_version": "1.4" 00:49:07.840 }, 00:49:07.840 "ns_data": { 00:49:07.840 "id": 1, 00:49:07.840 "can_share": false 00:49:07.840 } 00:49:07.840 } 00:49:07.840 ], 00:49:07.840 "mp_policy": "active_passive" 00:49:07.840 } 00:49:07.840 } 00:49:07.840 ]' 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:49:07.840 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:49:08.101 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=301a8f3f-17b8-4408-8a05-e317da6c2e3e 00:49:08.101 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:49:08.101 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 301a8f3f-17b8-4408-8a05-e317da6c2e3e 00:49:08.362 23:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:49:08.622 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5814b886-6418-4c33-8d12-ba9c74cecda3 00:49:08.623 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5814b886-6418-4c33-8d12-ba9c74cecda3 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4edb6d1f-d246-4e13-8587-982660fc5185 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4edb6d1f-d246-4e13-8587-982660fc5185 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4edb6d1f-d246-4e13-8587-982660fc5185 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:49:08.884 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:09.145 { 00:49:09.145 "name": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.145 "aliases": [ 00:49:09.145 "lvs/nvme0n1p0" 00:49:09.145 ], 00:49:09.145 "product_name": "Logical Volume", 00:49:09.145 "block_size": 4096, 00:49:09.145 "num_blocks": 26476544, 00:49:09.145 "uuid": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.145 "assigned_rate_limits": { 00:49:09.145 "rw_ios_per_sec": 0, 00:49:09.145 "rw_mbytes_per_sec": 0, 00:49:09.145 "r_mbytes_per_sec": 0, 00:49:09.145 "w_mbytes_per_sec": 0 00:49:09.145 }, 00:49:09.145 "claimed": false, 00:49:09.145 "zoned": false, 00:49:09.145 "supported_io_types": { 00:49:09.145 "read": true, 00:49:09.145 "write": true, 00:49:09.145 "unmap": true, 00:49:09.145 "flush": false, 00:49:09.145 "reset": true, 00:49:09.145 "nvme_admin": false, 00:49:09.145 "nvme_io": false, 00:49:09.145 "nvme_io_md": false, 00:49:09.145 "write_zeroes": true, 00:49:09.145 "zcopy": false, 00:49:09.145 "get_zone_info": false, 00:49:09.145 "zone_management": false, 00:49:09.145 "zone_append": false, 00:49:09.145 "compare": false, 00:49:09.145 "compare_and_write": false, 00:49:09.145 "abort": false, 00:49:09.145 "seek_hole": true, 00:49:09.145 "seek_data": true, 00:49:09.145 "copy": false, 00:49:09.145 "nvme_iov_md": false 00:49:09.145 }, 00:49:09.145 "driver_specific": { 00:49:09.145 "lvol": { 00:49:09.145 "lvol_store_uuid": "5814b886-6418-4c33-8d12-ba9c74cecda3", 00:49:09.145 "base_bdev": "nvme0n1", 00:49:09.145 "thin_provision": true, 00:49:09.145 "num_allocated_clusters": 0, 00:49:09.145 "snapshot": false, 00:49:09.145 "clone": false, 00:49:09.145 "esnap_clone": false 00:49:09.145 } 00:49:09.145 } 00:49:09.145 } 00:49:09.145 ]' 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:49:09.145 23:30:49 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:49:09.407 23:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.668 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:09.668 { 00:49:09.668 "name": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.668 "aliases": [ 00:49:09.668 "lvs/nvme0n1p0" 00:49:09.668 ], 00:49:09.669 "product_name": "Logical Volume", 00:49:09.669 "block_size": 4096, 00:49:09.669 "num_blocks": 26476544, 00:49:09.669 "uuid": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.669 "assigned_rate_limits": { 00:49:09.669 "rw_ios_per_sec": 0, 00:49:09.669 "rw_mbytes_per_sec": 0, 00:49:09.669 "r_mbytes_per_sec": 0, 00:49:09.669 "w_mbytes_per_sec": 0 00:49:09.669 }, 00:49:09.669 "claimed": false, 00:49:09.669 "zoned": false, 00:49:09.669 "supported_io_types": { 00:49:09.669 "read": true, 00:49:09.669 "write": true, 00:49:09.669 "unmap": true, 00:49:09.669 "flush": false, 00:49:09.669 "reset": true, 00:49:09.669 "nvme_admin": false, 00:49:09.669 "nvme_io": false, 00:49:09.669 "nvme_io_md": false, 00:49:09.669 "write_zeroes": true, 00:49:09.669 "zcopy": false, 00:49:09.669 "get_zone_info": false, 00:49:09.669 "zone_management": false, 00:49:09.669 "zone_append": false, 00:49:09.669 "compare": false, 00:49:09.669 "compare_and_write": false, 00:49:09.669 "abort": false, 00:49:09.669 "seek_hole": true, 00:49:09.669 "seek_data": true, 00:49:09.669 "copy": false, 00:49:09.669 "nvme_iov_md": false 00:49:09.669 }, 00:49:09.669 "driver_specific": { 00:49:09.669 "lvol": { 00:49:09.669 "lvol_store_uuid": "5814b886-6418-4c33-8d12-ba9c74cecda3", 00:49:09.669 "base_bdev": "nvme0n1", 00:49:09.669 "thin_provision": true, 00:49:09.669 "num_allocated_clusters": 0, 00:49:09.669 "snapshot": false, 00:49:09.669 "clone": false, 00:49:09.669 "esnap_clone": false 00:49:09.669 } 00:49:09.669 } 00:49:09.669 } 00:49:09.669 ]' 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:49:09.669 23:30:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4edb6d1f-d246-4e13-8587-982660fc5185 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:09.929 { 00:49:09.929 "name": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.929 "aliases": [ 00:49:09.929 "lvs/nvme0n1p0" 00:49:09.929 ], 00:49:09.929 "product_name": "Logical Volume", 00:49:09.929 "block_size": 4096, 00:49:09.929 "num_blocks": 26476544, 00:49:09.929 "uuid": "4edb6d1f-d246-4e13-8587-982660fc5185", 00:49:09.929 "assigned_rate_limits": { 00:49:09.929 "rw_ios_per_sec": 0, 00:49:09.929 "rw_mbytes_per_sec": 0, 00:49:09.929 "r_mbytes_per_sec": 0, 00:49:09.929 "w_mbytes_per_sec": 0 00:49:09.929 }, 00:49:09.929 "claimed": false, 00:49:09.929 "zoned": false, 00:49:09.929 "supported_io_types": { 00:49:09.929 "read": true, 00:49:09.929 "write": true, 00:49:09.929 "unmap": true, 00:49:09.929 "flush": false, 00:49:09.929 "reset": true, 00:49:09.929 "nvme_admin": false, 00:49:09.929 "nvme_io": false, 00:49:09.929 "nvme_io_md": false, 00:49:09.929 "write_zeroes": true, 00:49:09.929 "zcopy": false, 00:49:09.929 "get_zone_info": false, 00:49:09.929 "zone_management": false, 00:49:09.929 "zone_append": false, 00:49:09.929 "compare": false, 00:49:09.929 "compare_and_write": false, 00:49:09.929 "abort": false, 00:49:09.929 "seek_hole": true, 00:49:09.929 "seek_data": true, 00:49:09.929 "copy": false, 00:49:09.929 "nvme_iov_md": false 00:49:09.929 }, 00:49:09.929 "driver_specific": { 00:49:09.929 "lvol": { 00:49:09.929 "lvol_store_uuid": "5814b886-6418-4c33-8d12-ba9c74cecda3", 00:49:09.929 "base_bdev": "nvme0n1", 00:49:09.929 "thin_provision": true, 00:49:09.929 "num_allocated_clusters": 0, 00:49:09.929 "snapshot": false, 00:49:09.929 "clone": false, 00:49:09.929 "esnap_clone": false 00:49:09.929 } 00:49:09.929 } 00:49:09.929 } 00:49:09.929 ]' 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:49:09.929 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4edb6d1f-d246-4e13-8587-982660fc5185 
--l2p_dram_limit 10' 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:49:10.190 23:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4edb6d1f-d246-4e13-8587-982660fc5185 --l2p_dram_limit 10 -c nvc0n1p0 00:49:10.190 [2024-12-09 23:30:50.778689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.190 [2024-12-09 23:30:50.778729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:10.190 [2024-12-09 23:30:50.778741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:10.190 [2024-12-09 23:30:50.778747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.190 [2024-12-09 23:30:50.778792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.190 [2024-12-09 23:30:50.778799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:10.190 [2024-12-09 23:30:50.778807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:49:10.190 [2024-12-09 23:30:50.778813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.190 [2024-12-09 23:30:50.778833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:10.191 [2024-12-09 23:30:50.779416] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:10.191 [2024-12-09 23:30:50.779439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.779446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:10.191 [2024-12-09 23:30:50.779454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:49:10.191 [2024-12-09 23:30:50.779460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.779511] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e 00:49:10.191 [2024-12-09 23:30:50.780446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.780471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:49:10.191 [2024-12-09 23:30:50.780479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:49:10.191 [2024-12-09 23:30:50.780486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.785070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.785102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:10.191 [2024-12-09 23:30:50.785110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:49:10.191 [2024-12-09 23:30:50.785117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.785182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.785190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:10.191 [2024-12-09 23:30:50.785196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:49:10.191 [2024-12-09 23:30:50.785205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.785250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.785264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:10.191 [2024-12-09 23:30:50.785272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:10.191 [2024-12-09 23:30:50.785279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.785296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:10.191 [2024-12-09 23:30:50.788182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.788208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:10.191 [2024-12-09 23:30:50.788218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:49:10.191 [2024-12-09 23:30:50.788223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.788251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.788257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:10.191 [2024-12-09 23:30:50.788265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:10.191 [2024-12-09 23:30:50.788271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.788285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:49:10.191 [2024-12-09 23:30:50.788393] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:10.191 [2024-12-09 23:30:50.788405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:10.191 [2024-12-09 23:30:50.788414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:10.191 [2024-12-09 23:30:50.788423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788429] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788437] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:10.191 [2024-12-09 23:30:50.788443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:10.191 [2024-12-09 23:30:50.788452] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:10.191 [2024-12-09 23:30:50.788458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:10.191 [2024-12-09 23:30:50.788464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.788475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:10.191 [2024-12-09 23:30:50.788482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:49:10.191 [2024-12-09 23:30:50.788487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.788554] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.191 [2024-12-09 23:30:50.788564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:10.191 [2024-12-09 23:30:50.788572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:49:10.191 [2024-12-09 23:30:50.788577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.191 [2024-12-09 23:30:50.788653] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:10.191 [2024-12-09 23:30:50.788666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:10.191 [2024-12-09 23:30:50.788674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:10.191 [2024-12-09 23:30:50.788692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:10.191 [2024-12-09 23:30:50.788711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:10.191 [2024-12-09 23:30:50.788723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:10.191 [2024-12-09 23:30:50.788728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:10.191 [2024-12-09 23:30:50.788736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:10.191 [2024-12-09 23:30:50.788741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:10.191 [2024-12-09 23:30:50.788747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:49:10.191 [2024-12-09 23:30:50.788754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:10.191 [2024-12-09 23:30:50.788767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:10.191 [2024-12-09 23:30:50.788785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:10.191 [2024-12-09 23:30:50.788801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:10.191 [2024-12-09 23:30:50.788818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788830] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:10.191 [2024-12-09 23:30:50.788835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:10.191 [2024-12-09 23:30:50.788854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:10.191 [2024-12-09 23:30:50.788865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:10.191 [2024-12-09 23:30:50.788870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:49:10.191 [2024-12-09 23:30:50.788876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:10.191 [2024-12-09 23:30:50.788881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:10.191 [2024-12-09 23:30:50.788889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:49:10.191 [2024-12-09 23:30:50.788894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:10.191 [2024-12-09 23:30:50.788905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:49:10.191 [2024-12-09 23:30:50.788911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788916] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:10.191 [2024-12-09 23:30:50.788923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:10.191 [2024-12-09 23:30:50.788928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:10.191 [2024-12-09 23:30:50.788941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:10.191 [2024-12-09 23:30:50.788949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:10.191 [2024-12-09 23:30:50.788954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:10.191 [2024-12-09 23:30:50.788960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:10.191 [2024-12-09 23:30:50.788965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:10.191 [2024-12-09 23:30:50.788972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:10.191 [2024-12-09 23:30:50.788978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:10.191 [2024-12-09 23:30:50.788998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:10.191 [2024-12-09 23:30:50.789005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:10.192 [2024-12-09 23:30:50.789012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:49:10.192 [2024-12-09 23:30:50.789017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:49:10.192 [2024-12-09 23:30:50.789023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:49:10.192 [2024-12-09 23:30:50.789029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:49:10.192 [2024-12-09 23:30:50.789036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:49:10.192 [2024-12-09 23:30:50.789041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:49:10.192 [2024-12-09 23:30:50.789048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:49:10.192 [2024-12-09 23:30:50.789053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:49:10.192 [2024-12-09 23:30:50.789062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:49:10.192 [2024-12-09 23:30:50.789092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:10.192 [2024-12-09 23:30:50.789100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:10.192 [2024-12-09 23:30:50.789113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:10.192 [2024-12-09 23:30:50.789118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:10.192 [2024-12-09 23:30:50.789125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:10.192 [2024-12-09 23:30:50.789131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:10.192 [2024-12-09 23:30:50.789137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:10.192 [2024-12-09 23:30:50.789143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:49:10.192 [2024-12-09 23:30:50.789150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:10.192 [2024-12-09 23:30:50.789190] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:49:10.192 [2024-12-09 23:30:50.789201] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:49:14.399 [2024-12-09 23:30:54.751354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.751445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:49:14.399 [2024-12-09 23:30:54.751462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3962.146 ms 00:49:14.399 [2024-12-09 23:30:54.751474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.783851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.783921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:14.399 [2024-12-09 23:30:54.783937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.125 ms 00:49:14.399 [2024-12-09 23:30:54.783949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.784143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.784164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:14.399 [2024-12-09 23:30:54.784175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:49:14.399 [2024-12-09 23:30:54.784192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.819456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.819515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:14.399 [2024-12-09 23:30:54.819526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.220 ms 00:49:14.399 [2024-12-09 23:30:54.819537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.819572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.819587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:14.399 [2024-12-09 23:30:54.819597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:49:14.399 [2024-12-09 23:30:54.819615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.820244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.820285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:14.399 [2024-12-09 23:30:54.820296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:49:14.399 [2024-12-09 23:30:54.820307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.820423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.820435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:14.399 [2024-12-09 23:30:54.820447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:49:14.399 [2024-12-09 23:30:54.820460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.399 [2024-12-09 23:30:54.837810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.399 [2024-12-09 23:30:54.837869] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:49:14.399 [2024-12-09 23:30:54.837881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.330 ms
00:49:14.399 [2024-12-09 23:30:54.837891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:54.860870] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:49:14.399 [2024-12-09 23:30:54.865177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:54.865233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:49:14.399 [2024-12-09 23:30:54.865254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.153 ms
00:49:14.399 [2024-12-09 23:30:54.865267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:54.967721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:54.967788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:49:14.399 [2024-12-09 23:30:54.967807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.394 ms
00:49:14.399 [2024-12-09 23:30:54.967816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:54.968054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:54.968071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:49:14.399 [2024-12-09 23:30:54.968086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms
00:49:14.399 [2024-12-09 23:30:54.968095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:54.994035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:54.994085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:49:14.399 [2024-12-09 23:30:54.994101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms
00:49:14.399 [2024-12-09 23:30:54.994110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:55.019337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:55.019391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:49:14.399 [2024-12-09 23:30:55.019407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms
00:49:14.399 [2024-12-09 23:30:55.019414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.399 [2024-12-09 23:30:55.020043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.399 [2024-12-09 23:30:55.020064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:49:14.400 [2024-12-09 23:30:55.020077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms
00:49:14.400 [2024-12-09 23:30:55.020088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.107751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.107808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:49:14.661 [2024-12-09 23:30:55.107828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.600 ms
00:49:14.661 [2024-12-09 23:30:55.107837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.135860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.135919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:49:14.661 [2024-12-09 23:30:55.135935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.918 ms
00:49:14.661 [2024-12-09 23:30:55.135944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.163119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.163177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:49:14.661 [2024-12-09 23:30:55.163192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.101 ms
00:49:14.661 [2024-12-09 23:30:55.163200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.190538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.190596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:49:14.661 [2024-12-09 23:30:55.190613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.277 ms
00:49:14.661 [2024-12-09 23:30:55.190621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.190681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.190692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:49:14.661 [2024-12-09 23:30:55.190711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:49:14.661 [2024-12-09 23:30:55.190724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.190855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:14.661 [2024-12-09 23:30:55.190898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:49:14.661 [2024-12-09 23:30:55.190917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:49:14.661 [2024-12-09 23:30:55.190927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:14.661 [2024-12-09 23:30:55.192187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4412.925 ms, result 0
00:49:14.661 {
00:49:14.661 "name": "ftl0",
00:49:14.661 "uuid": "88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e"
00:49:14.661 }
00:49:14.661 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:49:14.661 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:49:14.922 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:49:14.922 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:49:14.922 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:49:15.181 /dev/nbd0
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:49:15.181 1+0 records in
00:49:15.181 1+0 records out
00:49:15.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006945 s, 5.9 MB/s
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:49:15.181 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:49:15.182 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:49:15.182 23:30:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
00:49:15.182 23:30:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
[2024-12-09 23:30:55.741573] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
[2024-12-09 23:30:55.741668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80632 ]
00:49:15.442 [2024-12-09 23:30:55.908817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:49:15.703 [2024-12-09 23:30:56.085327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:49:17.088  [2024-12-09T23:30:58.666Z] Copying: 187/1024 [MB] (187 MBps)
[2024-12-09T23:30:59.608Z] Copying: 374/1024 [MB] (186 MBps)
[2024-12-09T23:31:00.553Z] Copying: 560/1024 [MB] (186 MBps)
[2024-12-09T23:31:01.489Z] Copying: 747/1024 [MB] (186 MBps)
[2024-12-09T23:31:01.489Z] Copying: 993/1024 [MB] (246 MBps)
[2024-12-09T23:31:02.087Z] Copying: 1024/1024 [MB] (average 199 MBps)
00:49:21.451 
00:49:21.713 23:31:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:49:23.618 23:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:49:23.618 [2024-12-09 23:31:03.965400] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
00:49:23.618 [2024-12-09 23:31:03.965498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80720 ]
[2024-12-09 23:31:04.121163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 23:31:04.227626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:49:25.004  [2024-12-09T23:31:06.577Z] Copying: 21/1024 [MB] (21 MBps)
[2024-12-09T23:31:07.521Z] Copying: 44/1024 [MB] (23 MBps)
[2024-12-09T23:31:08.903Z] Copying: 66/1024 [MB] (21 MBps)
[2024-12-09T23:31:09.474Z] Copying: 92/1024 [MB] (25 MBps)
[2024-12-09T23:31:10.857Z] Copying: 115/1024 [MB] (23 MBps)
[2024-12-09T23:31:11.798Z] Copying: 140/1024 [MB] (24 MBps)
[2024-12-09T23:31:12.739Z] Copying: 165/1024 [MB] (24 MBps)
[2024-12-09T23:31:13.682Z] Copying: 197/1024 [MB] (32 MBps)
[2024-12-09T23:31:14.624Z] Copying: 221/1024 [MB] (24 MBps)
[2024-12-09T23:31:15.566Z] Copying: 245/1024 [MB] (23 MBps)
[2024-12-09T23:31:16.500Z] Copying: 268/1024 [MB] (23 MBps)
[2024-12-09T23:31:17.873Z] Copying: 286/1024 [MB] (18 MBps)
[2024-12-09T23:31:18.807Z] Copying: 300/1024 [MB] (13 MBps)
[2024-12-09T23:31:19.740Z] Copying: 320/1024 [MB] (19 MBps)
[2024-12-09T23:31:20.674Z] Copying: 344/1024 [MB] (23 MBps)
[2024-12-09T23:31:21.652Z] Copying: 378/1024 [MB] (34 MBps)
[2024-12-09T23:31:22.587Z] Copying: 413/1024 [MB] (34 MBps)
[2024-12-09T23:31:23.521Z] Copying: 448/1024 [MB] (35 MBps)
[2024-12-09T23:31:24.895Z] Copying: 482/1024 [MB] (33 MBps)
[2024-12-09T23:31:25.829Z] Copying: 499/1024 [MB] (16 MBps)
[2024-12-09T23:31:26.764Z] Copying: 520/1024 [MB] (20 MBps)
[2024-12-09T23:31:27.697Z] Copying: 540/1024 [MB] (20 MBps)
[2024-12-09T23:31:28.631Z] Copying: 560/1024 [MB] (20 MBps)
[2024-12-09T23:31:29.565Z] Copying: 581/1024 [MB] (21 MBps)
[2024-12-09T23:31:30.499Z] Copying: 602/1024 [MB] (20 MBps)
[2024-12-09T23:31:31.873Z] Copying: 622/1024 [MB] (19 MBps)
[2024-12-09T23:31:32.807Z] Copying: 640/1024 [MB] (17 MBps)
[2024-12-09T23:31:33.742Z] Copying: 662/1024 [MB] (22 MBps)
[2024-12-09T23:31:34.676Z] Copying: 695/1024 [MB] (33 MBps)
[2024-12-09T23:31:35.619Z] Copying: 710/1024 [MB] (15 MBps)
[2024-12-09T23:31:36.553Z] Copying: 726/1024 [MB] (15 MBps)
[2024-12-09T23:31:37.485Z] Copying: 739/1024 [MB] (12 MBps)
[2024-12-09T23:31:38.858Z] Copying: 753/1024 [MB] (14 MBps)
[2024-12-09T23:31:39.794Z] Copying: 773/1024 [MB] (19 MBps)
[2024-12-09T23:31:40.727Z] Copying: 800/1024 [MB] (27 MBps)
[2024-12-09T23:31:41.661Z] Copying: 820/1024 [MB] (19 MBps)
[2024-12-09T23:31:42.594Z] Copying: 840/1024 [MB] (19 MBps)
[2024-12-09T23:31:43.528Z] Copying: 863/1024 [MB] (23 MBps)
[2024-12-09T23:31:44.469Z] Copying: 887/1024 [MB] (24 MBps)
[2024-12-09T23:31:45.898Z] Copying: 918/1024 [MB] (31 MBps)
[2024-12-09T23:31:46.832Z] Copying: 943/1024 [MB] (24 MBps)
[2024-12-09T23:31:47.766Z] Copying: 969/1024 [MB] (25 MBps)
[2024-12-09T23:31:48.700Z] Copying: 984/1024 [MB] (15 MBps)
[2024-12-09T23:31:49.267Z] Copying: 1004/1024 [MB] (19 MBps)
[2024-12-09T23:31:49.836Z] Copying: 1024/1024 [MB] (average 22 MBps)
00:50:09.200 
00:50:09.200 23:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:50:09.200 23:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:50:09.462 23:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:50:09.462 [2024-12-09 23:31:50.030701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.030742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:50:09.462 [2024-12-09 23:31:50.030753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:50:09.462 [2024-12-09 23:31:50.030761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.030782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:50:09.462 [2024-12-09 23:31:50.032862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.032889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:50:09.462 [2024-12-09 23:31:50.032898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms
00:50:09.462 [2024-12-09 23:31:50.032905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.034721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.034751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:50:09.462 [2024-12-09 23:31:50.034760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms
00:50:09.462 [2024-12-09 23:31:50.034766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.048127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.048156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:50:09.462 [2024-12-09 23:31:50.048166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.341 ms
00:50:09.462 [2024-12-09 23:31:50.048173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.053053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.053078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:50:09.462 [2024-12-09 23:31:50.053087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms
00:50:09.462 [2024-12-09 23:31:50.053093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.071121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.071149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:50:09.462 [2024-12-09 23:31:50.071159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms
00:50:09.462 [2024-12-09 23:31:50.071165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.083484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.083512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:50:09.462 [2024-12-09 23:31:50.083524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms
00:50:09.462 [2024-12-09 23:31:50.083531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.462 [2024-12-09 23:31:50.083637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.462 [2024-12-09 23:31:50.083644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:50:09.462 [2024-12-09 23:31:50.083653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:50:09.462 [2024-12-09 23:31:50.083659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.725 [2024-12-09 23:31:50.101377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.725 [2024-12-09 23:31:50.101403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:50:09.725 [2024-12-09 23:31:50.101413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.701 ms
00:50:09.725 [2024-12-09 23:31:50.101419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.725 [2024-12-09 23:31:50.118906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.725 [2024-12-09 23:31:50.118932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:50:09.725 [2024-12-09 23:31:50.118942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.458 ms
00:50:09.725 [2024-12-09 23:31:50.118947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.725 [2024-12-09 23:31:50.136346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.725 [2024-12-09 23:31:50.136373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:50:09.725 [2024-12-09 23:31:50.136382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.367 ms
00:50:09.725 [2024-12-09 23:31:50.136388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.725 [2024-12-09 23:31:50.153318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.725 [2024-12-09 23:31:50.153345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:50:09.725 [2024-12-09 23:31:50.153355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.873 ms
00:50:09.725 [2024-12-09 23:31:50.153360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.725 [2024-12-09 23:31:50.153388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:50:09.725 [2024-12-09 23:31:50.153400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:50:09.725 [2024-12-09 23:31:50.153463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.153995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.154002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.154008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:50:09.726 [2024-12-09 23:31:50.154015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:50:09.727 [2024-12-09 23:31:50.154078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:50:09.727 [2024-12-09 23:31:50.154092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e
00:50:09.727 [2024-12-09 23:31:50.154098] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:50:09.727 [2024-12-09 23:31:50.154106] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:50:09.727 [2024-12-09 23:31:50.154113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:50:09.727 [2024-12-09 23:31:50.154120] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:50:09.727 [2024-12-09 23:31:50.154126] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:50:09.727 [2024-12-09 23:31:50.154132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:50:09.727 [2024-12-09 23:31:50.154138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:50:09.727 [2024-12-09 23:31:50.154144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:50:09.727 [2024-12-09 23:31:50.154149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:50:09.727 [2024-12-09 23:31:50.154155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.727 [2024-12-09 23:31:50.154161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:50:09.727 [2024-12-09 23:31:50.154169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms
00:50:09.727 [2024-12-09 23:31:50.154175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.163454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.727 [2024-12-09 23:31:50.163482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:50:09.727 [2024-12-09 23:31:50.163490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.252 ms
00:50:09.727 [2024-12-09 23:31:50.163496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.163766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:09.727 [2024-12-09 23:31:50.163780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:50:09.727 [2024-12-09 23:31:50.163788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms
00:50:09.727 [2024-12-09 23:31:50.163795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.196822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.196850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:50:09.727 [2024-12-09 23:31:50.196860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.196866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.196910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.196916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:50:09.727 [2024-12-09 23:31:50.196923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.196929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.197018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.197029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:50:09.727 [2024-12-09 23:31:50.197037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.197042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.197059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.197067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:50:09.727 [2024-12-09 23:31:50.197075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.197081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.256900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.256935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:50:09.727 [2024-12-09 23:31:50.256945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.256951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:50:09.727 [2024-12-09 23:31:50.305236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:50:09.727 [2024-12-09 23:31:50.305314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:50:09.727 [2024-12-09 23:31:50.305382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:50:09.727 [2024-12-09 23:31:50.305473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:50:09.727 [2024-12-09 23:31:50.305520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:50:09.727 [2024-12-09 23:31:50.305569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:09.727 [2024-12-09 23:31:50.305619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:50:09.727 [2024-12-09 23:31:50.305626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:09.727 [2024-12-09 23:31:50.305632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:09.727 [2024-12-09 23:31:50.305747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.005 ms, result 0
00:50:09.727 true
00:50:09.727 23:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80481
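(For readers following the trace: pieced together from the script-trace entries above and the spdk_dd runs that follow, the test flow around this kill is roughly the sketch below. It is a reconstruction from the log entries, not the verbatim ftl/dirty_shutdown.sh source; paths are abbreviated, the PID 80481 is specific to this run, and the redirection into ftl.json is inferred from the --json argument used at line 88.)

# Rough shape of ftl/dirty_shutdown.sh as traced in this log (sketch, not the real source)
{ echo '{"subsystems": ['; rpc.py save_subsystem_config -n bdev; echo ']}'; } > ftl.json  # @64-@66: capture bdev config for later standalone use
modprobe nbd                                                                              # @70
rpc.py nbd_start_disk ftl0 /dev/nbd0                                                      # @71: expose the FTL bdev as /dev/nbd0
spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144                   # @75: 1 GiB random payload
md5sum testfile                                                                           # @76: reference checksum
spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct       # @77: write the payload through NBD
sync /dev/nbd0                                                                            # @78
rpc.py nbd_stop_disk /dev/nbd0                                                            # @79
rpc.py bdev_ftl_unload -b ftl0                                                            # @80: the clean 'FTL shutdown' traced above, result 0
kill -9 80481                                                                             # @83: hard-kill spdk_tgt, no orderly teardown
rm -f /dev/shm/spdk_tgt_trace.pid80481                                                    # @84
spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144                         # @87: second payload
spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json             # @88: standalone spdk_dd re-creates ftl0 from the saved config; the recovery traced below is that startup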
00:50:09.727 23:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80481
00:50:09.727 23:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:50:09.988 [2024-12-09 23:31:50.398170] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
00:50:09.988 [2024-12-09 23:31:50.398285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81203 ]
00:50:09.988 [2024-12-09 23:31:50.555208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:50:10.249 [2024-12-09 23:31:50.651151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:50:11.638  [2024-12-09T23:31:53.216Z] Copying: 192/1024 [MB] (192 MBps)
[2024-12-09T23:31:54.156Z] Copying: 387/1024 [MB] (195 MBps)
[2024-12-09T23:31:55.097Z] Copying: 646/1024 [MB] (258 MBps)
[2024-12-09T23:31:55.668Z] Copying: 900/1024 [MB] (253 MBps)
[2024-12-09T23:31:55.927Z] Copying: 1024/1024 [MB] (average 227 MBps)
00:50:15.291 
00:50:15.552 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80481 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:50:15.552 23:31:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:50:15.552 [2024-12-09 23:31:55.997374] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
00:49:23.618 [2024-12-09 23:31:55.997493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81269 ]
00:50:15.813 [2024-12-09 23:31:56.159325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:50:15.813 [2024-12-09 23:31:56.254690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:50:16.073 [2024-12-09 23:31:56.514488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:50:16.073 [2024-12-09 23:31:56.514554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:50:16.073 [2024-12-09 23:31:56.579033] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:50:16.073 [2024-12-09 23:31:56.579647] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:50:16.073 [2024-12-09 23:31:56.580310] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:50:16.644 [2024-12-09 23:31:57.109176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.109233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:50:16.644 [2024-12-09 23:31:57.109248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:50:16.644 [2024-12-09 23:31:57.109259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.109314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.109325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:50:16.644 [2024-12-09 23:31:57.109334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:50:16.644 [2024-12-09 23:31:57.109342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.109363] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:50:16.644 [2024-12-09 23:31:57.110418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:50:16.644 [2024-12-09 23:31:57.110474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.110485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:50:16.644 [2024-12-09 23:31:57.110495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms
00:50:16.644 [2024-12-09 23:31:57.110504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.112192] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:50:16.644 [2024-12-09 23:31:57.126119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.126168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:50:16.644 [2024-12-09 23:31:57.126181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.930 ms
00:50:16.644 [2024-12-09 23:31:57.126190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.126268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.126279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:50:16.644 [2024-12-09 23:31:57.126288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:50:16.644 [2024-12-09 23:31:57.126297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.134299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.134344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:50:16.644 [2024-12-09 23:31:57.134355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.923 ms
00:50:16.644 [2024-12-09 23:31:57.134363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.134448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.134458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:50:16.644 [2024-12-09 23:31:57.134467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:50:16.644 [2024-12-09 23:31:57.134475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.134522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.134533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:50:16.644 [2024-12-09 23:31:57.134542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:50:16.644 [2024-12-09 23:31:57.134551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.134573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:50:16.644 [2024-12-09 23:31:57.138493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.138534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:50:16.644 [2024-12-09 23:31:57.138544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms
00:50:16.644 [2024-12-09 23:31:57.138552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.138591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.138600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:50:16.644 [2024-12-09 23:31:57.138609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:50:16.644 [2024-12-09 23:31:57.138618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.138674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:50:16.644 [2024-12-09 23:31:57.138697] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:50:16.644 [2024-12-09 23:31:57.138733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:50:16.644 [2024-12-09 23:31:57.138750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:50:16.644 [2024-12-09 23:31:57.138857] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:50:16.644 [2024-12-09 23:31:57.138869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:50:16.644 [2024-12-09 23:31:57.138880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:50:16.644 [2024-12-09 23:31:57.138893] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:50:16.644 [2024-12-09 23:31:57.138902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:50:16.644 [2024-12-09 23:31:57.138910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:50:16.644 [2024-12-09 23:31:57.138921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:50:16.644 [2024-12-09 23:31:57.138930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:50:16.644 [2024-12-09 23:31:57.138939] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:50:16.644 [2024-12-09 23:31:57.138947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.138955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:50:16.644 [2024-12-09 23:31:57.138964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms
00:50:16.644 [2024-12-09 23:31:57.138971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.139071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.644 [2024-12-09 23:31:57.139092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:50:16.644 [2024-12-09 23:31:57.139101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:50:16.644 [2024-12-09 23:31:57.139109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.644 [2024-12-09 23:31:57.139214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:50:16.644 [2024-12-09 23:31:57.139233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:50:16.644 [2024-12-09 23:31:57.139241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:50:16.644 [2024-12-09 23:31:57.139267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:50:16.644 [2024-12-09 23:31:57.139289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:50:16.644 [2024-12-09 23:31:57.139309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:50:16.644 [2024-12-09 23:31:57.139317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:50:16.644 [2024-12-09 23:31:57.139326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:50:16.644 [2024-12-09 23:31:57.139334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:50:16.644 [2024-12-09 23:31:57.139341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:50:16.644 [2024-12-09 23:31:57.139348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:50:16.644 [2024-12-09 23:31:57.139361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:50:16.644 [2024-12-09 23:31:57.139381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:50:16.644 [2024-12-09 23:31:57.139401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:50:16.644 [2024-12-09 23:31:57.139421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:50:16.644 [2024-12-09 23:31:57.139440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:50:16.644 [2024-12-09 23:31:57.139446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:50:16.644 [2024-12-09 23:31:57.139453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:50:16.645 [2024-12-09 23:31:57.139459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:50:16.645 [2024-12-09 23:31:57.139466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:50:16.645 [2024-12-09 23:31:57.139473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:50:16.645 [2024-12-09 23:31:57.139480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:50:16.645 [2024-12-09 23:31:57.139486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:50:16.645 [2024-12-09 23:31:57.139494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:50:16.645 [2024-12-09 23:31:57.139500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:50:16.645 [2024-12-09 23:31:57.139506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.645 [2024-12-09 23:31:57.139515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:50:16.645 [2024-12-09 23:31:57.139522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:50:16.645 [2024-12-09 23:31:57.139528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.645 [2024-12-09 23:31:57.139534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:50:16.645 [2024-12-09 23:31:57.139542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:50:16.645 [2024-12-09 23:31:57.139554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:50:16.645 [2024-12-09 23:31:57.139562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:50:16.645 [2024-12-09 23:31:57.139570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:50:16.645 [2024-12-09 23:31:57.139577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:50:16.645 [2024-12-09 23:31:57.139583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:50:16.645 [2024-12-09 23:31:57.139590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:50:16.645 [2024-12-09 23:31:57.139597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:50:16.645 [2024-12-09 23:31:57.139603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:50:16.645 [2024-12-09 23:31:57.139611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:50:16.645 [2024-12-09 23:31:57.139620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:50:16.645 [2024-12-09 23:31:57.139636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:50:16.645 [2024-12-09 23:31:57.139644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:50:16.645 [2024-12-09 23:31:57.139651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:50:16.645 [2024-12-09 23:31:57.139658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:50:16.645 [2024-12-09 23:31:57.139664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:50:16.645 [2024-12-09 23:31:57.139671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:50:16.645 [2024-12-09 23:31:57.139678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:50:16.645 [2024-12-09 23:31:57.139685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:50:16.645 [2024-12-09 23:31:57.139691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:50:16.645 [2024-12-09 23:31:57.139729] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:50:16.645 [2024-12-09 23:31:57.139737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:50:16.645 [2024-12-09 23:31:57.139752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:50:16.645 [2024-12-09 23:31:57.139759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:50:16.645 [2024-12-09 23:31:57.139767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:50:16.645 [2024-12-09 23:31:57.139775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.139783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:50:16.645 [2024-12-09 23:31:57.139796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms
00:50:16.645 [2024-12-09 23:31:57.139804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.171301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.171456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:50:16.645 [2024-12-09 23:31:57.171473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.451 ms
00:50:16.645 [2024-12-09 23:31:57.171482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.171575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.171584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:50:16.645 [2024-12-09 23:31:57.171592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:50:16.645 [2024-12-09 23:31:57.171600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.219410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.219455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:50:16.645 [2024-12-09 23:31:57.219472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.755 ms
00:50:16.645 [2024-12-09 23:31:57.219481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.219531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.219541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:50:16.645 [2024-12-09 23:31:57.219550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:50:16.645 [2024-12-09 23:31:57.219557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.219970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.220012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:50:16.645 [2024-12-09 23:31:57.220022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms
00:50:16.645 [2024-12-09 23:31:57.220036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.220185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.220195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:50:16.645 [2024-12-09 23:31:57.220203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms
00:50:16.645 [2024-12-09 23:31:57.220211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.234213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.234244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:50:16.645 [2024-12-09 23:31:57.234254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.982 ms
00:50:16.645 [2024-12-09 23:31:57.234262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.645 [2024-12-09 23:31:57.247659] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:50:16.645 [2024-12-09 23:31:57.247695] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:50:16.645 [2024-12-09 23:31:57.247708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.645 [2024-12-09 23:31:57.247716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:50:16.645 [2024-12-09 23:31:57.247725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.345 ms
00:50:16.645 [2024-12-09 23:31:57.247733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.277169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.277209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:50:16.907 [2024-12-09 23:31:57.277220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.388 ms
00:50:16.907 [2024-12-09 23:31:57.277228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.289342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.289499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:50:16.907 [2024-12-09 23:31:57.289517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.065 ms
00:50:16.907 [2024-12-09 23:31:57.289526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.301428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.301465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:50:16.907 [2024-12-09 23:31:57.301475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.868 ms
00:50:16.907 [2024-12-09 23:31:57.301483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.302175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.302205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:50:16.907 [2024-12-09 23:31:57.302216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms
00:50:16.907 [2024-12-09 23:31:57.302224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.365493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.365564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:50:16.907 [2024-12-09 23:31:57.365581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.248 ms
00:50:16.907 [2024-12-09 23:31:57.365592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.377342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:50:16.907 [2024-12-09 23:31:57.380911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.381161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:50:16.907 [2024-12-09 23:31:57.381183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.238 ms
00:50:16.907 [2024-12-09 23:31:57.381199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.381298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.381310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:50:16.907 [2024-12-09 23:31:57.381321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:50:16.907 [2024-12-09 23:31:57.381329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.381400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.381411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:50:16.907 [2024-12-09 23:31:57.381421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:50:16.907 [2024-12-09 23:31:57.381429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.381454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.381463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:50:16.907 [2024-12-09 23:31:57.381473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:50:16.907 [2024-12-09 23:31:57.381481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.381518] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:50:16.907 [2024-12-09 23:31:57.381528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.381536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:50:16.907 [2024-12-09 23:31:57.381545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:50:16.907 [2024-12-09 23:31:57.381557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.407383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09 23:31:57.407430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:50:16.907 [2024-12-09 23:31:57.407443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms
00:50:16.907 [2024-12-09 23:31:57.407452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.907 [2024-12-09 23:31:57.407539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.907 [2024-12-09
23:31:57.407549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:16.907 [2024-12-09 23:31:57.407559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:50:16.907 [2024-12-09 23:31:57.407568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.907 [2024-12-09 23:31:57.408869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.212 ms, result 0 00:50:17.849  [2024-12-09T23:31:59.428Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-09T23:32:00.815Z] Copying: 28/1024 [MB] (14 MBps) [2024-12-09T23:32:01.761Z] Copying: 44/1024 [MB] (16 MBps) [2024-12-09T23:32:02.707Z] Copying: 62/1024 [MB] (17 MBps) [2024-12-09T23:32:03.704Z] Copying: 96/1024 [MB] (34 MBps) [2024-12-09T23:32:04.650Z] Copying: 143/1024 [MB] (46 MBps) [2024-12-09T23:32:05.606Z] Copying: 164/1024 [MB] (21 MBps) [2024-12-09T23:32:06.554Z] Copying: 193/1024 [MB] (29 MBps) [2024-12-09T23:32:07.499Z] Copying: 227/1024 [MB] (33 MBps) [2024-12-09T23:32:08.440Z] Copying: 260/1024 [MB] (33 MBps) [2024-12-09T23:32:09.826Z] Copying: 280/1024 [MB] (19 MBps) [2024-12-09T23:32:10.768Z] Copying: 302/1024 [MB] (22 MBps) [2024-12-09T23:32:11.710Z] Copying: 322/1024 [MB] (19 MBps) [2024-12-09T23:32:12.653Z] Copying: 341/1024 [MB] (18 MBps) [2024-12-09T23:32:13.594Z] Copying: 353/1024 [MB] (12 MBps) [2024-12-09T23:32:14.532Z] Copying: 369/1024 [MB] (15 MBps) [2024-12-09T23:32:15.472Z] Copying: 384/1024 [MB] (15 MBps) [2024-12-09T23:32:16.855Z] Copying: 406/1024 [MB] (21 MBps) [2024-12-09T23:32:17.426Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-09T23:32:18.809Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-09T23:32:19.750Z] Copying: 437/1024 [MB] (10 MBps) [2024-12-09T23:32:20.694Z] Copying: 457752/1048576 [kB] (9808 kBps) [2024-12-09T23:32:21.639Z] Copying: 467928/1048576 [kB] (10176 kBps) [2024-12-09T23:32:22.585Z] Copying: 477476/1048576 [kB] (9548 kBps) [2024-12-09T23:32:23.531Z] Copying: 486884/1048576 [kB] (9408 kBps) [2024-12-09T23:32:24.477Z] Copying: 496696/1048576 [kB] (9812 kBps) [2024-12-09T23:32:25.862Z] Copying: 506420/1048576 [kB] (9724 kBps) [2024-12-09T23:32:26.434Z] Copying: 510/1024 [MB] (15 MBps) [2024-12-09T23:32:27.823Z] Copying: 520/1024 [MB] (10 MBps) [2024-12-09T23:32:28.769Z] Copying: 532/1024 [MB] (12 MBps) [2024-12-09T23:32:29.712Z] Copying: 543/1024 [MB] (10 MBps) [2024-12-09T23:32:30.655Z] Copying: 557/1024 [MB] (13 MBps) [2024-12-09T23:32:31.599Z] Copying: 604/1024 [MB] (47 MBps) [2024-12-09T23:32:32.542Z] Copying: 630/1024 [MB] (25 MBps) [2024-12-09T23:32:33.485Z] Copying: 647/1024 [MB] (17 MBps) [2024-12-09T23:32:34.432Z] Copying: 663/1024 [MB] (15 MBps) [2024-12-09T23:32:35.819Z] Copying: 676/1024 [MB] (13 MBps) [2024-12-09T23:32:36.764Z] Copying: 697/1024 [MB] (20 MBps) [2024-12-09T23:32:37.706Z] Copying: 717/1024 [MB] (19 MBps) [2024-12-09T23:32:38.649Z] Copying: 729/1024 [MB] (12 MBps) [2024-12-09T23:32:39.595Z] Copying: 740/1024 [MB] (11 MBps) [2024-12-09T23:32:40.539Z] Copying: 751/1024 [MB] (10 MBps) [2024-12-09T23:32:41.484Z] Copying: 764/1024 [MB] (12 MBps) [2024-12-09T23:32:42.428Z] Copying: 779/1024 [MB] (15 MBps) [2024-12-09T23:32:43.816Z] Copying: 793/1024 [MB] (14 MBps) [2024-12-09T23:32:44.761Z] Copying: 822820/1048576 [kB] (9840 kBps) [2024-12-09T23:32:45.819Z] Copying: 814/1024 [MB] (10 MBps) [2024-12-09T23:32:46.760Z] Copying: 828/1024 [MB] (13 MBps) [2024-12-09T23:32:47.704Z] Copying: 839/1024 [MB] (11 MBps) [2024-12-09T23:32:48.648Z] Copying: 850/1024 
[MB] (10 MBps) [2024-12-09T23:32:49.590Z] Copying: 864/1024 [MB] (13 MBps) [2024-12-09T23:32:50.532Z] Copying: 875/1024 [MB] (10 MBps) [2024-12-09T23:32:51.510Z] Copying: 885/1024 [MB] (10 MBps) [2024-12-09T23:32:52.451Z] Copying: 916784/1048576 [kB] (10088 kBps) [2024-12-09T23:32:53.838Z] Copying: 926924/1048576 [kB] (10140 kBps) [2024-12-09T23:32:54.782Z] Copying: 915/1024 [MB] (10 MBps) [2024-12-09T23:32:55.721Z] Copying: 947432/1048576 [kB] (10216 kBps) [2024-12-09T23:32:56.664Z] Copying: 957400/1048576 [kB] (9968 kBps) [2024-12-09T23:32:57.607Z] Copying: 967136/1048576 [kB] (9736 kBps) [2024-12-09T23:32:58.551Z] Copying: 954/1024 [MB] (10 MBps) [2024-12-09T23:32:59.494Z] Copying: 965/1024 [MB] (10 MBps) [2024-12-09T23:33:00.438Z] Copying: 976/1024 [MB] (11 MBps) [2024-12-09T23:33:01.849Z] Copying: 986/1024 [MB] (10 MBps) [2024-12-09T23:33:02.793Z] Copying: 997/1024 [MB] (10 MBps) [2024-12-09T23:33:02.793Z] Copying: 1022/1024 [MB] (25 MBps) [2024-12-09T23:33:02.793Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 23:33:02.560543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.560590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:22.157 [2024-12-09 23:33:02.560604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:51:22.157 [2024-12-09 23:33:02.560612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.560631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:22.157 [2024-12-09 23:33:02.563304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.563330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:22.157 [2024-12-09 23:33:02.563340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:51:22.157 [2024-12-09 23:33:02.563349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.565184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.565213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:22.157 [2024-12-09 23:33:02.565224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:51:22.157 [2024-12-09 23:33:02.565231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.581481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.581512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:22.157 [2024-12-09 23:33:02.581522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.236 ms 00:51:22.157 [2024-12-09 23:33:02.581530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.587630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.587662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:22.157 [2024-12-09 23:33:02.587671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.076 ms 00:51:22.157 [2024-12-09 23:33:02.587678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.612271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.612306] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:22.157 [2024-12-09 23:33:02.612317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.548 ms 00:51:22.157 [2024-12-09 23:33:02.612325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.626483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.626635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:22.157 [2024-12-09 23:33:02.626655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.125 ms 00:51:22.157 [2024-12-09 23:33:02.626663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.629608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.629643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:22.157 [2024-12-09 23:33:02.629658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:51:22.157 [2024-12-09 23:33:02.629666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.653883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.654038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:22.157 [2024-12-09 23:33:02.654400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.203 ms 00:51:22.157 [2024-12-09 23:33:02.654432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.678747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.678793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:22.157 [2024-12-09 23:33:02.678805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.265 ms 00:51:22.157 [2024-12-09 23:33:02.678813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.702804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.702846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:22.157 [2024-12-09 23:33:02.702858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.950 ms 00:51:22.157 [2024-12-09 23:33:02.702866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.157 [2024-12-09 23:33:02.727407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.157 [2024-12-09 23:33:02.727449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:22.157 [2024-12-09 23:33:02.727460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:51:22.157 [2024-12-09 23:33:02.727468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.158 [2024-12-09 23:33:02.727509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:22.158 [2024-12-09 23:33:02.727525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:51:22.158 [2024-12-09 23:33:02.727536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 
261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727934] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.727975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 
23:33:02.728154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:22.158 [2024-12-09 23:33:02.728201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:22.159 [2024-12-09 23:33:02.728354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:22.159 [2024-12-09 23:33:02.728363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e 00:51:22.159 [2024-12-09 
23:33:02.728381] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:51:22.159 [2024-12-09 23:33:02.728391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:51:22.159 [2024-12-09 23:33:02.728399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:51:22.159 [2024-12-09 23:33:02.728408] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:51:22.159 [2024-12-09 23:33:02.728416] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:22.159 [2024-12-09 23:33:02.728424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:22.159 [2024-12-09 23:33:02.728434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:22.159 [2024-12-09 23:33:02.728440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:22.159 [2024-12-09 23:33:02.728447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:22.159 [2024-12-09 23:33:02.728456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.159 [2024-12-09 23:33:02.728465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:22.159 [2024-12-09 23:33:02.728473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:51:22.159 [2024-12-09 23:33:02.728482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.742017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.159 [2024-12-09 23:33:02.742053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:22.159 [2024-12-09 23:33:02.742065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.515 ms 00:51:22.159 [2024-12-09 23:33:02.742073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.742492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:22.159 [2024-12-09 23:33:02.742509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:22.159 [2024-12-09 23:33:02.742519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:51:22.159 [2024-12-09 23:33:02.742534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.778847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.159 [2024-12-09 23:33:02.778896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:22.159 [2024-12-09 23:33:02.778908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.159 [2024-12-09 23:33:02.778918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.779011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.159 [2024-12-09 23:33:02.779022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:22.159 [2024-12-09 23:33:02.779032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.159 [2024-12-09 23:33:02.779047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.779109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.159 [2024-12-09 23:33:02.779121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:22.159 [2024-12-09 23:33:02.779131] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.159 [2024-12-09 23:33:02.779140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.159 [2024-12-09 23:33:02.779157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.159 [2024-12-09 23:33:02.779167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:22.159 [2024-12-09 23:33:02.779174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.159 [2024-12-09 23:33:02.779182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.862575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.862636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:22.421 [2024-12-09 23:33:02.862650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.862659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:22.421 [2024-12-09 23:33:02.931341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:22.421 [2024-12-09 23:33:02.931461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:22.421 [2024-12-09 23:33:02.931524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:22.421 [2024-12-09 23:33:02.931653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:22.421 [2024-12-09 23:33:02.931712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:51:22.421 [2024-12-09 23:33:02.931785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.931841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:22.421 [2024-12-09 23:33:02.931852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:22.421 [2024-12-09 23:33:02.931860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:22.421 [2024-12-09 23:33:02.931868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:22.421 [2024-12-09 23:33:02.932036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.418 ms, result 0 00:51:23.365 00:51:23.365 00:51:23.365 23:33:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:51:25.948 23:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:25.948 [2024-12-09 23:33:06.171814] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:51:25.948 [2024-12-09 23:33:06.171968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81973 ] 00:51:25.948 [2024-12-09 23:33:06.337191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:25.948 [2024-12-09 23:33:06.467564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:26.209 [2024-12-09 23:33:06.769412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:26.209 [2024-12-09 23:33:06.769505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:26.472 [2024-12-09 23:33:06.932148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.472 [2024-12-09 23:33:06.932220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:51:26.472 [2024-12-09 23:33:06.932235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:51:26.472 [2024-12-09 23:33:06.932244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.472 [2024-12-09 23:33:06.932304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.472 [2024-12-09 23:33:06.932318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:26.472 [2024-12-09 23:33:06.932328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:51:26.472 [2024-12-09 23:33:06.932336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.472 [2024-12-09 23:33:06.932358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:51:26.473 [2024-12-09 23:33:06.933072] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:51:26.473 [2024-12-09 23:33:06.933106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.933116] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:26.473 [2024-12-09 23:33:06.933125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:51:26.473 [2024-12-09 23:33:06.933133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.935010] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:51:26.473 [2024-12-09 23:33:06.948913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.949150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:51:26.473 [2024-12-09 23:33:06.949176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.905 ms 00:51:26.473 [2024-12-09 23:33:06.949185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.949377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.949406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:51:26.473 [2024-12-09 23:33:06.949418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:51:26.473 [2024-12-09 23:33:06.949427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.958302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.958347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:26.473 [2024-12-09 23:33:06.958359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:51:26.473 [2024-12-09 23:33:06.958373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.958458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.958468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:26.473 [2024-12-09 23:33:06.958478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:51:26.473 [2024-12-09 23:33:06.958487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.958535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.958546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:51:26.473 [2024-12-09 23:33:06.958555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:51:26.473 [2024-12-09 23:33:06.958563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.958591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:51:26.473 [2024-12-09 23:33:06.962801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.962844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:26.473 [2024-12-09 23:33:06.962859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.217 ms 00:51:26.473 [2024-12-09 23:33:06.962867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.962909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.962919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:51:26.473 [2024-12-09 23:33:06.962933] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:26.473 [2024-12-09 23:33:06.962942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.963027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:51:26.473 [2024-12-09 23:33:06.963054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:51:26.473 [2024-12-09 23:33:06.963092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:51:26.473 [2024-12-09 23:33:06.963112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:51:26.473 [2024-12-09 23:33:06.963221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:51:26.473 [2024-12-09 23:33:06.963232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:51:26.473 [2024-12-09 23:33:06.963244] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:51:26.473 [2024-12-09 23:33:06.963254] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963264] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:51:26.473 [2024-12-09 23:33:06.963281] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:51:26.473 [2024-12-09 23:33:06.963292] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:51:26.473 [2024-12-09 23:33:06.963300] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:51:26.473 [2024-12-09 23:33:06.963309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.963318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:51:26.473 [2024-12-09 23:33:06.963327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:51:26.473 [2024-12-09 23:33:06.963335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.963422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.473 [2024-12-09 23:33:06.963431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:51:26.473 [2024-12-09 23:33:06.963440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:51:26.473 [2024-12-09 23:33:06.963448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.473 [2024-12-09 23:33:06.963554] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:51:26.473 [2024-12-09 23:33:06.963565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:51:26.473 [2024-12-09 23:33:06.963573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:51:26.473 [2024-12-09 23:33:06.963597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:51:26.473 [2024-12-09 
23:33:06.963604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:51:26.473 [2024-12-09 23:33:06.963618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:26.473 [2024-12-09 23:33:06.963633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:51:26.473 [2024-12-09 23:33:06.963640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:51:26.473 [2024-12-09 23:33:06.963646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:26.473 [2024-12-09 23:33:06.963660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:51:26.473 [2024-12-09 23:33:06.963669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:51:26.473 [2024-12-09 23:33:06.963677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:51:26.473 [2024-12-09 23:33:06.963691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:51:26.473 [2024-12-09 23:33:06.963712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:51:26.473 [2024-12-09 23:33:06.963733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:51:26.473 [2024-12-09 23:33:06.963752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:51:26.473 [2024-12-09 23:33:06.963772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:51:26.473 [2024-12-09 23:33:06.963792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:26.473 [2024-12-09 23:33:06.963807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:51:26.473 [2024-12-09 23:33:06.963814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:51:26.473 [2024-12-09 23:33:06.963821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:26.473 [2024-12-09 23:33:06.963830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:51:26.473 [2024-12-09 23:33:06.963837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:51:26.473 [2024-12-09 23:33:06.963844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:51:26.473 [2024-12-09 23:33:06.963865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:51:26.473 [2024-12-09 23:33:06.963871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963879] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:51:26.473 [2024-12-09 23:33:06.963887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:51:26.473 [2024-12-09 23:33:06.963895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:26.473 [2024-12-09 23:33:06.963905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:26.473 [2024-12-09 23:33:06.963914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:51:26.473 [2024-12-09 23:33:06.963922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:51:26.473 [2024-12-09 23:33:06.963929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:51:26.473 [2024-12-09 23:33:06.963937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:51:26.474 [2024-12-09 23:33:06.963945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:51:26.474 [2024-12-09 23:33:06.963953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:51:26.474 [2024-12-09 23:33:06.963963] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:51:26.474 [2024-12-09 23:33:06.963972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:51:26.474 [2024-12-09 23:33:06.964019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:51:26.474 [2024-12-09 23:33:06.964027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:51:26.474 [2024-12-09 23:33:06.964035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:51:26.474 [2024-12-09 23:33:06.964044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:51:26.474 [2024-12-09 23:33:06.964051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:51:26.474 [2024-12-09 23:33:06.964058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:51:26.474 [2024-12-09 23:33:06.964066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:51:26.474 [2024-12-09 23:33:06.964074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:51:26.474 [2024-12-09 23:33:06.964081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:51:26.474 [2024-12-09 23:33:06.964119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:51:26.474 [2024-12-09 23:33:06.964129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:26.474 [2024-12-09 23:33:06.964145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:51:26.474 [2024-12-09 23:33:06.964152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:51:26.474 [2024-12-09 23:33:06.964160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:51:26.474 [2024-12-09 23:33:06.964170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:06.964179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:51:26.474 [2024-12-09 23:33:06.964188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:51:26.474 [2024-12-09 23:33:06.964198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:06.997663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:06.997729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:26.474 [2024-12-09 23:33:06.997744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.417 ms 00:51:26.474 [2024-12-09 23:33:06.997759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:06.997859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:06.997869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:26.474 [2024-12-09 23:33:06.997880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:51:26.474 [2024-12-09 23:33:06.997889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.048877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.048933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:26.474 
[2024-12-09 23:33:07.048947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.916 ms 00:51:26.474 [2024-12-09 23:33:07.048957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.049034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.049046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:26.474 [2024-12-09 23:33:07.049060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:51:26.474 [2024-12-09 23:33:07.049069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.049734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.049767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:26.474 [2024-12-09 23:33:07.049778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:51:26.474 [2024-12-09 23:33:07.049787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.049956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.049967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:26.474 [2024-12-09 23:33:07.050000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:51:26.474 [2024-12-09 23:33:07.050010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.066021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.066070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:26.474 [2024-12-09 23:33:07.066082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.990 ms 00:51:26.474 [2024-12-09 23:33:07.066090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.474 [2024-12-09 23:33:07.080695] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:51:26.474 [2024-12-09 23:33:07.080748] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:51:26.474 [2024-12-09 23:33:07.080762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.474 [2024-12-09 23:33:07.080773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:51:26.474 [2024-12-09 23:33:07.080782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.552 ms 00:51:26.474 [2024-12-09 23:33:07.080791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.106759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.106814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:51:26.735 [2024-12-09 23:33:07.106827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.908 ms 00:51:26.735 [2024-12-09 23:33:07.106835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.119713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.119761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:51:26.735 [2024-12-09 23:33:07.119773] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.817 ms 00:51:26.735 [2024-12-09 23:33:07.119781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.132464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.132511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:51:26.735 [2024-12-09 23:33:07.132523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.632 ms 00:51:26.735 [2024-12-09 23:33:07.132531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.133276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.133313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:26.735 [2024-12-09 23:33:07.133329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:51:26.735 [2024-12-09 23:33:07.133337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.199724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.199799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:51:26.735 [2024-12-09 23:33:07.199824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.362 ms 00:51:26.735 [2024-12-09 23:33:07.199834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.211475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:26.735 [2024-12-09 23:33:07.214940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.215161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:26.735 [2024-12-09 23:33:07.215183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.038 ms 00:51:26.735 [2024-12-09 23:33:07.215194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.215296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.215308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:51:26.735 [2024-12-09 23:33:07.215322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:51:26.735 [2024-12-09 23:33:07.215331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.216193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.216244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:26.735 [2024-12-09 23:33:07.216256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:51:26.735 [2024-12-09 23:33:07.216266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.735 [2024-12-09 23:33:07.216302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.735 [2024-12-09 23:33:07.216313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:51:26.735 [2024-12-09 23:33:07.216322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:26.736 [2024-12-09 23:33:07.216332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.736 [2024-12-09 23:33:07.216380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: 
[FTL][ftl0] Self test skipped 00:51:26.736 [2024-12-09 23:33:07.216393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.736 [2024-12-09 23:33:07.216403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:51:26.736 [2024-12-09 23:33:07.216413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:26.736 [2024-12-09 23:33:07.216423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.736 [2024-12-09 23:33:07.243135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.736 [2024-12-09 23:33:07.243187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:51:26.736 [2024-12-09 23:33:07.243208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.690 ms 00:51:26.736 [2024-12-09 23:33:07.243217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.736 [2024-12-09 23:33:07.243305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:26.736 [2024-12-09 23:33:07.243316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:51:26.736 [2024-12-09 23:33:07.243325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:51:26.736 [2024-12-09 23:33:07.243334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:26.736 [2024-12-09 23:33:07.244870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.213 ms, result 0 00:51:28.122  [2024-12-09T23:33:09.703Z] Copying: 968/1048576 [kB] (968 kBps) [2024-12-09T23:33:10.646Z] Copying: 2020/1048576 [kB] (1052 kBps) [2024-12-09T23:33:11.632Z] Copying: 4904/1048576 [kB] (2884 kBps) [2024-12-09T23:33:12.575Z] Copying: 21/1024 [MB] (16 MBps) [2024-12-09T23:33:13.519Z] Copying: 40/1024 [MB] (19 MBps) [2024-12-09T23:33:14.463Z] Copying: 55/1024 [MB] (15 MBps) [2024-12-09T23:33:15.849Z] Copying: 72/1024 [MB] (16 MBps) [2024-12-09T23:33:16.794Z] Copying: 99/1024 [MB] (27 MBps) [2024-12-09T23:33:17.738Z] Copying: 116/1024 [MB] (17 MBps) [2024-12-09T23:33:18.680Z] Copying: 131/1024 [MB] (14 MBps) [2024-12-09T23:33:19.622Z] Copying: 165/1024 [MB] (34 MBps) [2024-12-09T23:33:20.565Z] Copying: 184/1024 [MB] (18 MBps) [2024-12-09T23:33:21.510Z] Copying: 209/1024 [MB] (24 MBps) [2024-12-09T23:33:22.454Z] Copying: 238/1024 [MB] (28 MBps) [2024-12-09T23:33:23.430Z] Copying: 256/1024 [MB] (18 MBps) [2024-12-09T23:33:24.816Z] Copying: 288/1024 [MB] (31 MBps) [2024-12-09T23:33:25.753Z] Copying: 327/1024 [MB] (39 MBps) [2024-12-09T23:33:26.688Z] Copying: 371/1024 [MB] (43 MBps) [2024-12-09T23:33:27.622Z] Copying: 422/1024 [MB] (50 MBps) [2024-12-09T23:33:28.562Z] Copying: 471/1024 [MB] (49 MBps) [2024-12-09T23:33:29.504Z] Copying: 518/1024 [MB] (47 MBps) [2024-12-09T23:33:30.443Z] Copying: 563/1024 [MB] (44 MBps) [2024-12-09T23:33:31.823Z] Copying: 607/1024 [MB] (43 MBps) [2024-12-09T23:33:32.762Z] Copying: 654/1024 [MB] (47 MBps) [2024-12-09T23:33:33.697Z] Copying: 703/1024 [MB] (48 MBps) [2024-12-09T23:33:34.633Z] Copying: 750/1024 [MB] (47 MBps) [2024-12-09T23:33:35.620Z] Copying: 799/1024 [MB] (48 MBps) [2024-12-09T23:33:36.553Z] Copying: 851/1024 [MB] (52 MBps) [2024-12-09T23:33:37.487Z] Copying: 904/1024 [MB] (52 MBps) [2024-12-09T23:33:38.861Z] Copying: 957/1024 [MB] (53 MBps) [2024-12-09T23:33:38.861Z] Copying: 1009/1024 [MB] (52 MBps) [2024-12-09T23:33:39.429Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-12-09 23:33:39.129170] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.129235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:58.793 [2024-12-09 23:33:39.129249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:51:58.793 [2024-12-09 23:33:39.129257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.129279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:58.793 [2024-12-09 23:33:39.132177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.132209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:58.793 [2024-12-09 23:33:39.132220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:51:58.793 [2024-12-09 23:33:39.132229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.132448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.132463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:58.793 [2024-12-09 23:33:39.132471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:51:58.793 [2024-12-09 23:33:39.132479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.148700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.148762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:58.793 [2024-12-09 23:33:39.148776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.203 ms 00:51:58.793 [2024-12-09 23:33:39.148783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.155101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.155144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:58.793 [2024-12-09 23:33:39.155164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.288 ms 00:51:58.793 [2024-12-09 23:33:39.155172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.183753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.183805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:58.793 [2024-12-09 23:33:39.183818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.520 ms 00:51:58.793 [2024-12-09 23:33:39.183826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.197962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.198026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:58.793 [2024-12-09 23:33:39.198040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.081 ms 00:51:58.793 [2024-12-09 23:33:39.198047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.200569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.200602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:58.793 [2024-12-09 23:33:39.200613] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:51:58.793 [2024-12-09 23:33:39.200626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.223957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.224009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:58.793 [2024-12-09 23:33:39.224021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.315 ms 00:51:58.793 [2024-12-09 23:33:39.224028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.246885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.246929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:58.793 [2024-12-09 23:33:39.246941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.815 ms 00:51:58.793 [2024-12-09 23:33:39.246949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.269490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.269699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:58.793 [2024-12-09 23:33:39.269718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.476 ms 00:51:58.793 [2024-12-09 23:33:39.269727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.292364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.793 [2024-12-09 23:33:39.292569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:58.793 [2024-12-09 23:33:39.292587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.569 ms 00:51:58.793 [2024-12-09 23:33:39.292594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.793 [2024-12-09 23:33:39.292631] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:58.793 [2024-12-09 23:33:39.292646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:58.793 [2024-12-09 23:33:39.292656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:51:58.793 [2024-12-09 23:33:39.292664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:58.793 [2024-12-09 23:33:39.292672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:58.793 [2024-12-09 23:33:39.292680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 
23:33:39.292726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:51:58.794 [2024-12-09 23:33:39.292914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.292979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:58.794 [2024-12-09 23:33:39.293327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:58.795 [2024-12-09 23:33:39.293437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:58.795 [2024-12-09 23:33:39.293445] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e 00:51:58.795 [2024-12-09 23:33:39.293453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:51:58.795 [2024-12-09 23:33:39.293460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:51:58.795 [2024-12-09 23:33:39.293470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:51:58.795 [2024-12-09 23:33:39.293478] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:51:58.795 [2024-12-09 23:33:39.293484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:58.795 [2024-12-09 23:33:39.293499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:58.795 [2024-12-09 23:33:39.293507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:58.795 [2024-12-09 23:33:39.293513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:58.795 [2024-12-09 23:33:39.293520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:58.795 [2024-12-09 23:33:39.293527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:51:58.795 [2024-12-09 23:33:39.293535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:58.795 [2024-12-09 23:33:39.293543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:51:58.795 [2024-12-09 23:33:39.293550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.305745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.795 [2024-12-09 23:33:39.305786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:58.795 [2024-12-09 23:33:39.305798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.176 ms 00:51:58.795 [2024-12-09 23:33:39.305806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.306177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.795 [2024-12-09 23:33:39.306187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:58.795 [2024-12-09 23:33:39.306195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:51:58.795 [2024-12-09 23:33:39.306202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.338388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.795 [2024-12-09 23:33:39.338439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:58.795 [2024-12-09 23:33:39.338450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.795 [2024-12-09 23:33:39.338457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.338521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.795 [2024-12-09 23:33:39.338529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:58.795 [2024-12-09 23:33:39.338537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.795 [2024-12-09 23:33:39.338544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.338626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.795 [2024-12-09 23:33:39.338636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:58.795 [2024-12-09 23:33:39.338644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.795 [2024-12-09 23:33:39.338651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.338666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.795 [2024-12-09 23:33:39.338674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:58.795 [2024-12-09 23:33:39.338682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.795 [2024-12-09 23:33:39.338689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.795 [2024-12-09 23:33:39.414744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.795 [2024-12-09 23:33:39.414958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:58.795 [2024-12-09 23:33:39.414976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.795 [2024-12-09 23:33:39.415010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 
[2024-12-09 23:33:39.491062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:59.054 [2024-12-09 23:33:39.491148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:59.054 [2024-12-09 23:33:39.491278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:59.054 [2024-12-09 23:33:39.491390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:59.054 [2024-12-09 23:33:39.491572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:59.054 [2024-12-09 23:33:39.491656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:59.054 [2024-12-09 23:33:39.491748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.491813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:59.054 [2024-12-09 23:33:39.491829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:59.054 [2024-12-09 23:33:39.491842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:59.054 [2024-12-09 23:33:39.491855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:59.054 [2024-12-09 23:33:39.492034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.800 ms, result 0 00:51:59.621 00:51:59.621 00:51:59.621 23:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:52:02.155 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:52:02.155 23:33:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:52:02.155 [2024-12-09 23:33:42.294138] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:52:02.155 [2024-12-09 23:33:42.294256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82343 ] 00:52:02.155 [2024-12-09 23:33:42.455708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:02.155 [2024-12-09 23:33:42.552193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:02.414 [2024-12-09 23:33:42.809327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:02.414 [2024-12-09 23:33:42.809400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:02.414 [2024-12-09 23:33:42.963133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.963191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:52:02.414 [2024-12-09 23:33:42.963204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:52:02.414 [2024-12-09 23:33:42.963212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.963260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.963272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:02.414 [2024-12-09 23:33:42.963281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:52:02.414 [2024-12-09 23:33:42.963289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.963307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:52:02.414 [2024-12-09 23:33:42.963970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:52:02.414 [2024-12-09 23:33:42.964004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.964013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:02.414 [2024-12-09 23:33:42.964022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:52:02.414 [2024-12-09 23:33:42.964029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.965105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:52:02.414 [2024-12-09 23:33:42.977246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.977279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:52:02.414 [2024-12-09 23:33:42.977291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:52:02.414 [2024-12-09 23:33:42.977298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.977355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:52:02.414 [2024-12-09 23:33:42.977364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:52:02.414 [2024-12-09 23:33:42.977372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:52:02.414 [2024-12-09 23:33:42.977380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.982135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.982167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:02.414 [2024-12-09 23:33:42.982177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:52:02.414 [2024-12-09 23:33:42.982188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.982261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.982270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:02.414 [2024-12-09 23:33:42.982278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:52:02.414 [2024-12-09 23:33:42.982285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.982324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.982332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:52:02.414 [2024-12-09 23:33:42.982340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:52:02.414 [2024-12-09 23:33:42.982347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.982371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:52:02.414 [2024-12-09 23:33:42.985513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.985538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:02.414 [2024-12-09 23:33:42.985550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:52:02.414 [2024-12-09 23:33:42.985557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.414 [2024-12-09 23:33:42.985586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.414 [2024-12-09 23:33:42.985594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:52:02.414 [2024-12-09 23:33:42.985602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:52:02.415 [2024-12-09 23:33:42.985609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.415 [2024-12-09 23:33:42.985629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:52:02.415 [2024-12-09 23:33:42.985648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:52:02.415 [2024-12-09 23:33:42.985682] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:52:02.415 [2024-12-09 23:33:42.985707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:52:02.415 [2024-12-09 23:33:42.985808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:52:02.415 [2024-12-09 23:33:42.985818] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:52:02.415 [2024-12-09 23:33:42.985829] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:52:02.415 [2024-12-09 23:33:42.985839] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:52:02.415 [2024-12-09 23:33:42.985847] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:52:02.415 [2024-12-09 23:33:42.985856] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:52:02.415 [2024-12-09 23:33:42.985863] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:52:02.415 [2024-12-09 23:33:42.985873] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:52:02.415 [2024-12-09 23:33:42.985880] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:52:02.415 [2024-12-09 23:33:42.985887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.415 [2024-12-09 23:33:42.985895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:52:02.415 [2024-12-09 23:33:42.985902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:52:02.415 [2024-12-09 23:33:42.985909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.415 [2024-12-09 23:33:42.986009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.415 [2024-12-09 23:33:42.986018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:52:02.415 [2024-12-09 23:33:42.986025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:52:02.415 [2024-12-09 23:33:42.986032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.415 [2024-12-09 23:33:42.986135] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:52:02.415 [2024-12-09 23:33:42.986144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:52:02.415 [2024-12-09 23:33:42.986152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:52:02.415 [2024-12-09 23:33:42.986173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:52:02.415 [2024-12-09 23:33:42.986193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:02.415 [2024-12-09 23:33:42.986206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:52:02.415 [2024-12-09 23:33:42.986212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:52:02.415 [2024-12-09 23:33:42.986218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:02.415 [2024-12-09 23:33:42.986231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:52:02.415 [2024-12-09 23:33:42.986239] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:52:02.415 [2024-12-09 23:33:42.986246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:52:02.415 [2024-12-09 23:33:42.986259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:52:02.415 [2024-12-09 23:33:42.986278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:52:02.415 [2024-12-09 23:33:42.986297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:52:02.415 [2024-12-09 23:33:42.986316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:52:02.415 [2024-12-09 23:33:42.986335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:52:02.415 [2024-12-09 23:33:42.986354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:02.415 [2024-12-09 23:33:42.986366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:52:02.415 [2024-12-09 23:33:42.986372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:52:02.415 [2024-12-09 23:33:42.986378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:02.415 [2024-12-09 23:33:42.986384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:52:02.415 [2024-12-09 23:33:42.986391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:52:02.415 [2024-12-09 23:33:42.986397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:52:02.415 [2024-12-09 23:33:42.986410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:52:02.415 [2024-12-09 23:33:42.986416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986422] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:52:02.415 [2024-12-09 23:33:42.986430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:52:02.415 [2024-12-09 23:33:42.986436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:02.415 [2024-12-09 
23:33:42.986444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:02.415 [2024-12-09 23:33:42.986451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:52:02.415 [2024-12-09 23:33:42.986458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:52:02.415 [2024-12-09 23:33:42.986464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:52:02.415 [2024-12-09 23:33:42.986471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:52:02.415 [2024-12-09 23:33:42.986477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:52:02.415 [2024-12-09 23:33:42.986483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:52:02.415 [2024-12-09 23:33:42.986491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:52:02.415 [2024-12-09 23:33:42.986500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:02.415 [2024-12-09 23:33:42.986510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:52:02.415 [2024-12-09 23:33:42.986524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:52:02.415 [2024-12-09 23:33:42.986531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:52:02.415 [2024-12-09 23:33:42.986538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:52:02.415 [2024-12-09 23:33:42.986544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:52:02.415 [2024-12-09 23:33:42.986551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:52:02.415 [2024-12-09 23:33:42.986558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:52:02.415 [2024-12-09 23:33:42.986565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:52:02.415 [2024-12-09 23:33:42.986571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:52:02.416 [2024-12-09 23:33:42.986578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:52:02.416 [2024-12-09 
23:33:42.986612] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:52:02.416 [2024-12-09 23:33:42.986619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:02.416 [2024-12-09 23:33:42.986634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:52:02.416 [2024-12-09 23:33:42.986641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:52:02.416 [2024-12-09 23:33:42.986648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:52:02.416 [2024-12-09 23:33:42.986655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.416 [2024-12-09 23:33:42.986663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:52:02.416 [2024-12-09 23:33:42.986670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:52:02.416 [2024-12-09 23:33:42.986678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.416 [2024-12-09 23:33:43.012032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.416 [2024-12-09 23:33:43.012193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:02.416 [2024-12-09 23:33:43.012208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.299 ms 00:52:02.416 [2024-12-09 23:33:43.012220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.416 [2024-12-09 23:33:43.012307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.416 [2024-12-09 23:33:43.012316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:52:02.416 [2024-12-09 23:33:43.012324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:52:02.416 [2024-12-09 23:33:43.012331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.675 [2024-12-09 23:33:43.059164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.675 [2024-12-09 23:33:43.059220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:02.675 [2024-12-09 23:33:43.059233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.777 ms 00:52:02.675 [2024-12-09 23:33:43.059241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.675 [2024-12-09 23:33:43.059291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.675 [2024-12-09 23:33:43.059301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:02.675 [2024-12-09 23:33:43.059313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:52:02.675 [2024-12-09 23:33:43.059320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.675 [2024-12-09 23:33:43.059676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.675 [2024-12-09 23:33:43.059692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:02.675 [2024-12-09 23:33:43.059701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:52:02.675 [2024-12-09 23:33:43.059708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.675 [2024-12-09 23:33:43.059834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.675 [2024-12-09 23:33:43.059843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:02.675 [2024-12-09 23:33:43.059856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:52:02.676 [2024-12-09 23:33:43.059863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.072567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.072600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:02.676 [2024-12-09 23:33:43.072610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:52:02.676 [2024-12-09 23:33:43.072617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.084836] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:52:02.676 [2024-12-09 23:33:43.084966] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:52:02.676 [2024-12-09 23:33:43.084996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.085004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:52:02.676 [2024-12-09 23:33:43.085014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:52:02.676 [2024-12-09 23:33:43.085021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.109436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.109481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:52:02.676 [2024-12-09 23:33:43.109494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.378 ms 00:52:02.676 [2024-12-09 23:33:43.109502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.121356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.121392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:52:02.676 [2024-12-09 23:33:43.121404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.783 ms 00:52:02.676 [2024-12-09 23:33:43.121412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.132686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.132717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:52:02.676 [2024-12-09 23:33:43.132728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.236 ms 00:52:02.676 [2024-12-09 23:33:43.132736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.133374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.133397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:52:02.676 [2024-12-09 23:33:43.133409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.548 ms 00:52:02.676 [2024-12-09 23:33:43.133417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.187684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.187855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:52:02.676 [2024-12-09 23:33:43.187880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.248 ms 00:52:02.676 [2024-12-09 23:33:43.187888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.199387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:52:02.676 [2024-12-09 23:33:43.202055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.202169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:52:02.676 [2024-12-09 23:33:43.202184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:52:02.676 [2024-12-09 23:33:43.202193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.202301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.202312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:52:02.676 [2024-12-09 23:33:43.202322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:52:02.676 [2024-12-09 23:33:43.202330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.202877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.202901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:52:02.676 [2024-12-09 23:33:43.202910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:52:02.676 [2024-12-09 23:33:43.202917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.202939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.202947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:52:02.676 [2024-12-09 23:33:43.202954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:52:02.676 [2024-12-09 23:33:43.202962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.203007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:52:02.676 [2024-12-09 23:33:43.203018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.203025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:52:02.676 [2024-12-09 23:33:43.203032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:52:02.676 [2024-12-09 23:33:43.203040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.226296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.226329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:52:02.676 [2024-12-09 23:33:43.226343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.239 ms 00:52:02.676 [2024-12-09 23:33:43.226351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
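The startup sequence above is emitted by mngt/ftl_mngt.c as one four-line record per management step: an Action marker, the step name, its duration, and a status code (0 on success). A minimal sketch, not part of the test itself, for pulling name/duration pairs out of such output; it assumes one record line per console line as the original log emits them, and the capture file name console.log is hypothetical:

    # Pair each step name with its duration from a captured console log.
    grep 'trace_step' console.log |
      grep -E 'name:|duration:' |
      sed -e 's/.*name: /name: /' -e 's/.*duration: /duration: /' |
      paste - -

Each record logs the name line (ftl_mngt.c:428) before the duration line (ftl_mngt.c:430), which is why joining consecutive lines with paste recovers the pairs.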
00:52:02.676 [2024-12-09 23:33:43.226415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:02.676 [2024-12-09 23:33:43.226423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:52:02.676 [2024-12-09 23:33:43.226432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:52:02.676 [2024-12-09 23:33:43.226439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:02.676 [2024-12-09 23:33:43.227420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.855 ms, result 0 00:52:04.052  [2024-12-09T23:33:45.622Z] Copying: 49/1024 [MB] (49 MBps) [2024-12-09T23:33:46.557Z] Copying: 97/1024 [MB] (47 MBps) [2024-12-09T23:33:47.560Z] Copying: 143/1024 [MB] (46 MBps) [2024-12-09T23:33:48.494Z] Copying: 188/1024 [MB] (45 MBps) [2024-12-09T23:33:49.429Z] Copying: 236/1024 [MB] (47 MBps) [2024-12-09T23:33:50.805Z] Copying: 284/1024 [MB] (48 MBps) [2024-12-09T23:33:51.740Z] Copying: 332/1024 [MB] (48 MBps) [2024-12-09T23:33:52.691Z] Copying: 380/1024 [MB] (47 MBps) [2024-12-09T23:33:53.658Z] Copying: 427/1024 [MB] (46 MBps) [2024-12-09T23:33:54.594Z] Copying: 472/1024 [MB] (45 MBps) [2024-12-09T23:33:55.529Z] Copying: 511/1024 [MB] (39 MBps) [2024-12-09T23:33:56.464Z] Copying: 546/1024 [MB] (34 MBps) [2024-12-09T23:33:57.838Z] Copying: 576/1024 [MB] (30 MBps) [2024-12-09T23:33:58.405Z] Copying: 609/1024 [MB] (33 MBps) [2024-12-09T23:33:59.781Z] Copying: 645/1024 [MB] (36 MBps) [2024-12-09T23:34:00.716Z] Copying: 681/1024 [MB] (35 MBps) [2024-12-09T23:34:01.650Z] Copying: 717/1024 [MB] (35 MBps) [2024-12-09T23:34:02.585Z] Copying: 754/1024 [MB] (36 MBps) [2024-12-09T23:34:03.518Z] Copying: 798/1024 [MB] (44 MBps) [2024-12-09T23:34:04.452Z] Copying: 845/1024 [MB] (47 MBps) [2024-12-09T23:34:05.832Z] Copying: 884/1024 [MB] (38 MBps) [2024-12-09T23:34:06.426Z] Copying: 899/1024 [MB] (15 MBps) [2024-12-09T23:34:07.798Z] Copying: 917/1024 [MB] (17 MBps) [2024-12-09T23:34:08.733Z] Copying: 928/1024 [MB] (11 MBps) [2024-12-09T23:34:09.670Z] Copying: 939/1024 [MB] (10 MBps) [2024-12-09T23:34:10.604Z] Copying: 951/1024 [MB] (11 MBps) [2024-12-09T23:34:11.542Z] Copying: 962/1024 [MB] (11 MBps) [2024-12-09T23:34:12.474Z] Copying: 973/1024 [MB] (10 MBps) [2024-12-09T23:34:13.408Z] Copying: 984/1024 [MB] (11 MBps) [2024-12-09T23:34:14.781Z] Copying: 996/1024 [MB] (12 MBps) [2024-12-09T23:34:15.745Z] Copying: 1008/1024 [MB] (11 MBps) [2024-12-09T23:34:16.003Z] Copying: 1020/1024 [MB] (11 MBps) [2024-12-09T23:34:16.003Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-12-09 23:34:15.856722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.856790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:52:35.367 [2024-12-09 23:34:15.856806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:52:35.367 [2024-12-09 23:34:15.856816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.856844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:52:35.367 [2024-12-09 23:34:15.860688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.860729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:52:35.367 [2024-12-09 23:34:15.860741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.825 ms 00:52:35.367 [2024-12-09 23:34:15.860751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.861041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.861054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:52:35.367 [2024-12-09 23:34:15.861065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:52:35.367 [2024-12-09 23:34:15.861075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.864890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.864910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:52:35.367 [2024-12-09 23:34:15.864920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.799 ms 00:52:35.367 [2024-12-09 23:34:15.864933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.871488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.871629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:52:35.367 [2024-12-09 23:34:15.871646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:52:35.367 [2024-12-09 23:34:15.871655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.896411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.896446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:52:35.367 [2024-12-09 23:34:15.896457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.694 ms 00:52:35.367 [2024-12-09 23:34:15.896464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.910268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.910400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:52:35.367 [2024-12-09 23:34:15.910416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.771 ms 00:52:35.367 [2024-12-09 23:34:15.910425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.915617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.915661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:52:35.367 [2024-12-09 23:34:15.915672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.895 ms 00:52:35.367 [2024-12-09 23:34:15.915681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.939477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.367 [2024-12-09 23:34:15.939512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:52:35.367 [2024-12-09 23:34:15.939523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.781 ms 00:52:35.367 [2024-12-09 23:34:15.939531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.367 [2024-12-09 23:34:15.962813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.368 [2024-12-09 23:34:15.962845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:52:35.368 [2024-12-09 
23:34:15.962857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.251 ms 00:52:35.368 [2024-12-09 23:34:15.962864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.368 [2024-12-09 23:34:15.985517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.368 [2024-12-09 23:34:15.985654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:52:35.368 [2024-12-09 23:34:15.985670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:52:35.368 [2024-12-09 23:34:15.985688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.627 [2024-12-09 23:34:16.008494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.627 [2024-12-09 23:34:16.008628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:52:35.627 [2024-12-09 23:34:16.008643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.753 ms 00:52:35.627 [2024-12-09 23:34:16.008650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.627 [2024-12-09 23:34:16.008678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:52:35.627 [2024-12-09 23:34:16.008698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:35.627 [2024-12-09 23:34:16.008711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:52:35.627 [2024-12-09 23:34:16.008719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008824] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.008976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 
[2024-12-09 23:34:16.009040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:52:35.627 [2024-12-09 23:34:16.009077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:52:35.628 [2024-12-09 23:34:16.009227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:52:35.628 [2024-12-09 23:34:16.009485] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:52:35.628 [2024-12-09 23:34:16.009492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ac8b18-ad26-45c3-9b69-18bf4f1a3d3e 00:52:35.628 [2024-12-09 23:34:16.009499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:52:35.628 [2024-12-09 23:34:16.009507] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:52:35.628 [2024-12-09 23:34:16.009513] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:52:35.628 [2024-12-09 23:34:16.009520] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:52:35.628 [2024-12-09 23:34:16.009533] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:52:35.628 [2024-12-09 23:34:16.009541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:52:35.628 [2024-12-09 23:34:16.009549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:52:35.628 [2024-12-09 23:34:16.009555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:52:35.628 [2024-12-09 23:34:16.009561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:52:35.628 [2024-12-09 23:34:16.009568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.628 [2024-12-09 23:34:16.009575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:52:35.628 [2024-12-09 23:34:16.009584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:52:35.628 [2024-12-09 23:34:16.009593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.021909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.628 [2024-12-09 23:34:16.021939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:52:35.628 [2024-12-09 23:34:16.021950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.300 ms 00:52:35.628 [2024-12-09 23:34:16.021959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.022328] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:35.628 [2024-12-09 23:34:16.022347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:52:35.628 [2024-12-09 23:34:16.022355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:52:35.628 [2024-12-09 23:34:16.022363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.054636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.628 [2024-12-09 23:34:16.054680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:35.628 [2024-12-09 23:34:16.054690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.628 [2024-12-09 23:34:16.054698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.054758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.628 [2024-12-09 23:34:16.054770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:35.628 [2024-12-09 23:34:16.054778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.628 [2024-12-09 23:34:16.054785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.054841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.628 [2024-12-09 23:34:16.054851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:35.628 [2024-12-09 23:34:16.054859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.628 [2024-12-09 23:34:16.054866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.628 [2024-12-09 23:34:16.054881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.628 [2024-12-09 23:34:16.054888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:35.628 [2024-12-09 23:34:16.054898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.054906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.131042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.131095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:35.629 [2024-12-09 23:34:16.131107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.131116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:35.629 [2024-12-09 23:34:16.193103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:35.629 [2024-12-09 23:34:16.193196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:35.629 [2024-12-09 23:34:16.193251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:35.629 [2024-12-09 23:34:16.193369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:52:35.629 [2024-12-09 23:34:16.193419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:35.629 [2024-12-09 23:34:16.193480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:35.629 [2024-12-09 23:34:16.193537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:35.629 [2024-12-09 23:34:16.193544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:35.629 [2024-12-09 23:34:16.193554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:35.629 [2024-12-09 23:34:16.193661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.919 ms, result 0 00:52:36.564 00:52:36.564 00:52:36.564 23:34:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:52:38.474 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:52:38.474 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:52:38.474 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:52:38.474 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:52:38.475 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
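The "testfile2: OK" result above is the point of the dirty-shutdown test: data written before the device was marked dirty must still read back intact once the FTL bdev is brought up again and its non-volatile cache state is restored. The check follows the usual md5sum round trip, sketched here with illustrative paths rather than the suite's real ones:

    # Record a checksum of the written data before the dirty shutdown...
    md5sum testfile2 > testfile2.md5
    # ...restart the FTL bdev (NV cache recovery happens during startup)...
    # ...then verify the data survived:
    md5sum -c testfile2.md5    # prints 'testfile2: OK' on success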
00:52:38.732 Process with pid 80481 is not found 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80481 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80481 ']' 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80481 00:52:38.732 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80481) - No such process 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80481 is not found' 00:52:38.732 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:52:38.989 Remove shared memory files 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:52:38.989 ************************************ 00:52:38.989 END TEST ftl_dirty_shutdown 00:52:38.989 ************************************ 00:52:38.989 00:52:38.989 real 3m32.906s 00:52:38.989 user 3m56.503s 00:52:38.989 sys 0m27.144s 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:38.989 23:34:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:39.248 23:34:19 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:52:39.248 23:34:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:52:39.248 23:34:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:39.248 23:34:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:52:39.248 ************************************ 00:52:39.248 START TEST ftl_upgrade_shutdown 00:52:39.248 ************************************ 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:52:39.248 * Looking for test storage... 
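The next test is driven the same way as the previous one: run_test hands upgrade_shutdown.sh the base device (0000:00:11.0) and the NV-cache device (0000:00:10.0) as positional PCI addresses. An illustrative standalone invocation mirroring the arguments above; the addresses depend on the VM's device layout:

    # From the SPDK repo root; FTL tests normally need root for
    # hugepage and device access.
    sudo test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0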
00:52:39.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:39.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:39.248 --rc genhtml_branch_coverage=1 00:52:39.248 --rc genhtml_function_coverage=1 00:52:39.248 --rc genhtml_legend=1 00:52:39.248 --rc geninfo_all_blocks=1 00:52:39.248 --rc geninfo_unexecuted_blocks=1 00:52:39.248 00:52:39.248 ' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:39.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:39.248 --rc genhtml_branch_coverage=1 00:52:39.248 --rc genhtml_function_coverage=1 00:52:39.248 --rc genhtml_legend=1 00:52:39.248 --rc geninfo_all_blocks=1 00:52:39.248 --rc geninfo_unexecuted_blocks=1 00:52:39.248 00:52:39.248 ' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:39.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:39.248 --rc genhtml_branch_coverage=1 00:52:39.248 --rc genhtml_function_coverage=1 00:52:39.248 --rc genhtml_legend=1 00:52:39.248 --rc geninfo_all_blocks=1 00:52:39.248 --rc geninfo_unexecuted_blocks=1 00:52:39.248 00:52:39.248 ' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:39.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:39.248 --rc genhtml_branch_coverage=1 00:52:39.248 --rc genhtml_function_coverage=1 00:52:39.248 --rc genhtml_legend=1 00:52:39.248 --rc geninfo_all_blocks=1 00:52:39.248 --rc geninfo_unexecuted_blocks=1 00:52:39.248 00:52:39.248 ' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:52:39.248 23:34:19 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82786 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82786 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82786 ']' 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:39.248 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:39.249 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:39.249 23:34:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:39.506 [2024-12-09 23:34:19.891412] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:52:39.506 [2024-12-09 23:34:19.891621] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82786 ] 00:52:39.506 [2024-12-09 23:34:20.054414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:39.764 [2024-12-09 23:34:20.175017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:40.328 23:34:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:40.328 23:34:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:52:40.328 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:40.328 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:52:40.329 23:34:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:52:40.586 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:52:40.843 { 00:52:40.843 "name": "basen1", 00:52:40.843 "aliases": [ 00:52:40.843 "744e7b2d-3669-4049-9212-62c8fdb3d3b6" 00:52:40.843 ], 00:52:40.843 "product_name": "NVMe disk", 00:52:40.843 "block_size": 4096, 00:52:40.843 "num_blocks": 1310720, 00:52:40.843 "uuid": "744e7b2d-3669-4049-9212-62c8fdb3d3b6", 00:52:40.843 "numa_id": -1, 00:52:40.843 "assigned_rate_limits": { 00:52:40.843 "rw_ios_per_sec": 0, 00:52:40.843 "rw_mbytes_per_sec": 0, 00:52:40.843 "r_mbytes_per_sec": 0, 00:52:40.843 "w_mbytes_per_sec": 0 00:52:40.843 }, 00:52:40.843 "claimed": true, 00:52:40.843 "claim_type": "read_many_write_one", 00:52:40.843 "zoned": false, 00:52:40.843 "supported_io_types": { 00:52:40.843 "read": true, 00:52:40.843 "write": true, 00:52:40.843 "unmap": true, 00:52:40.843 "flush": true, 00:52:40.843 "reset": true, 00:52:40.843 "nvme_admin": true, 00:52:40.843 "nvme_io": true, 00:52:40.843 "nvme_io_md": false, 00:52:40.843 "write_zeroes": true, 00:52:40.843 "zcopy": false, 00:52:40.843 "get_zone_info": false, 00:52:40.843 "zone_management": false, 00:52:40.843 "zone_append": false, 00:52:40.843 "compare": true, 00:52:40.843 "compare_and_write": false, 00:52:40.843 "abort": true, 00:52:40.843 "seek_hole": false, 00:52:40.843 "seek_data": false, 00:52:40.843 "copy": true, 00:52:40.843 "nvme_iov_md": false 00:52:40.843 }, 00:52:40.843 "driver_specific": { 00:52:40.843 "nvme": [ 00:52:40.843 { 00:52:40.843 "pci_address": "0000:00:11.0", 00:52:40.843 "trid": { 00:52:40.843 "trtype": "PCIe", 00:52:40.843 "traddr": "0000:00:11.0" 00:52:40.843 }, 00:52:40.843 "ctrlr_data": { 00:52:40.843 "cntlid": 0, 00:52:40.843 "vendor_id": "0x1b36", 00:52:40.843 "model_number": "QEMU NVMe Ctrl", 00:52:40.843 "serial_number": "12341", 00:52:40.843 "firmware_revision": "8.0.0", 00:52:40.843 "subnqn": "nqn.2019-08.org.qemu:12341", 00:52:40.843 "oacs": { 00:52:40.843 "security": 0, 00:52:40.843 "format": 1, 00:52:40.843 "firmware": 0, 00:52:40.843 "ns_manage": 1 00:52:40.843 }, 00:52:40.843 "multi_ctrlr": false, 00:52:40.843 "ana_reporting": false 00:52:40.843 }, 00:52:40.843 "vs": { 00:52:40.843 "nvme_version": "1.4" 00:52:40.843 }, 00:52:40.843 "ns_data": { 00:52:40.843 "id": 1, 00:52:40.843 "can_share": false 00:52:40.843 } 00:52:40.843 } 00:52:40.843 ], 00:52:40.843 "mp_policy": "active_passive" 00:52:40.843 } 00:52:40.843 } 00:52:40.843 ]' 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:52:40.843 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:52:41.100 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5814b886-6418-4c33-8d12-ba9c74cecda3 00:52:41.100 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:52:41.100 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5814b886-6418-4c33-8d12-ba9c74cecda3 00:52:41.357 23:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=a5a8afb9-fa23-4070-87fe-0c3a36ddd106 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a5a8afb9-fa23-4070-87fe-0c3a36ddd106 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b014dc3e-6929-483e-8ddb-9a2e43296b87 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b014dc3e-6929-483e-8ddb-9a2e43296b87 ]] 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b014dc3e-6929-483e-8ddb-9a2e43296b87 5120 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b014dc3e-6929-483e-8ddb-9a2e43296b87 00:52:41.615 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b014dc3e-6929-483e-8ddb-9a2e43296b87 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b014dc3e-6929-483e-8ddb-9a2e43296b87 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b014dc3e-6929-483e-8ddb-9a2e43296b87 00:52:41.872 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:52:41.872 { 00:52:41.872 "name": "b014dc3e-6929-483e-8ddb-9a2e43296b87", 00:52:41.872 "aliases": [ 00:52:41.872 "lvs/basen1p0" 00:52:41.872 ], 00:52:41.872 "product_name": "Logical Volume", 00:52:41.872 "block_size": 4096, 00:52:41.872 "num_blocks": 5242880, 00:52:41.872 "uuid": "b014dc3e-6929-483e-8ddb-9a2e43296b87", 00:52:41.872 "assigned_rate_limits": { 00:52:41.872 "rw_ios_per_sec": 0, 00:52:41.872 "rw_mbytes_per_sec": 0, 00:52:41.872 "r_mbytes_per_sec": 0, 00:52:41.872 "w_mbytes_per_sec": 0 00:52:41.872 }, 00:52:41.872 "claimed": false, 00:52:41.872 "zoned": false, 00:52:41.872 "supported_io_types": { 00:52:41.872 "read": true, 00:52:41.872 "write": true, 00:52:41.872 "unmap": true, 00:52:41.872 "flush": false, 00:52:41.872 "reset": true, 00:52:41.872 "nvme_admin": false, 00:52:41.872 "nvme_io": false, 00:52:41.872 "nvme_io_md": false, 00:52:41.872 "write_zeroes": 
true, 00:52:41.872 "zcopy": false, 00:52:41.872 "get_zone_info": false, 00:52:41.872 "zone_management": false, 00:52:41.872 "zone_append": false, 00:52:41.872 "compare": false, 00:52:41.872 "compare_and_write": false, 00:52:41.872 "abort": false, 00:52:41.872 "seek_hole": true, 00:52:41.872 "seek_data": true, 00:52:41.872 "copy": false, 00:52:41.872 "nvme_iov_md": false 00:52:41.872 }, 00:52:41.872 "driver_specific": { 00:52:41.873 "lvol": { 00:52:41.873 "lvol_store_uuid": "a5a8afb9-fa23-4070-87fe-0c3a36ddd106", 00:52:41.873 "base_bdev": "basen1", 00:52:41.873 "thin_provision": true, 00:52:41.873 "num_allocated_clusters": 0, 00:52:41.873 "snapshot": false, 00:52:41.873 "clone": false, 00:52:41.873 "esnap_clone": false 00:52:41.873 } 00:52:41.873 } 00:52:41.873 } 00:52:41.873 ]' 00:52:41.873 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:52:41.873 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:52:41.873 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:52:42.211 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:52:42.468 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:52:42.468 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:52:42.468 23:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b014dc3e-6929-483e-8ddb-9a2e43296b87 -c cachen1p0 --l2p_dram_limit 2 00:52:42.726 [2024-12-09 23:34:23.172438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.172645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:42.726 [2024-12-09 23:34:23.172669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:42.726 [2024-12-09 23:34:23.172678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.172751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.172763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:42.726 [2024-12-09 23:34:23.172773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:52:42.726 [2024-12-09 23:34:23.172781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.172804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:42.726 [2024-12-09 
23:34:23.173557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:42.726 [2024-12-09 23:34:23.173587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.173595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:42.726 [2024-12-09 23:34:23.173607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:52:42.726 [2024-12-09 23:34:23.173615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.173647] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a267ee6b-6a37-4ef3-b054-685c3a12fe75 00:52:42.726 [2024-12-09 23:34:23.174758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.174791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:52:42.726 [2024-12-09 23:34:23.174801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:52:42.726 [2024-12-09 23:34:23.174810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.179975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.180030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:42.726 [2024-12-09 23:34:23.180040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.088 ms 00:52:42.726 [2024-12-09 23:34:23.180049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.180088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.180098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:42.726 [2024-12-09 23:34:23.180106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:52:42.726 [2024-12-09 23:34:23.180117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.180166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.180177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:42.726 [2024-12-09 23:34:23.180188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:52:42.726 [2024-12-09 23:34:23.180197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.180218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:42.726 [2024-12-09 23:34:23.183815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.183932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:42.726 [2024-12-09 23:34:23.183951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.600 ms 00:52:42.726 [2024-12-09 23:34:23.183959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.184008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.184018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:42.726 [2024-12-09 23:34:23.184028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:42.726 [2024-12-09 23:34:23.184035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.184053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:52:42.726 [2024-12-09 23:34:23.184193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:52:42.726 [2024-12-09 23:34:23.184208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:42.726 [2024-12-09 23:34:23.184218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:52:42.726 [2024-12-09 23:34:23.184230] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184238] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:42.726 [2024-12-09 23:34:23.184255] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:42.726 [2024-12-09 23:34:23.184268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:52:42.726 [2024-12-09 23:34:23.184275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:52:42.726 [2024-12-09 23:34:23.184284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.184291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:42.726 [2024-12-09 23:34:23.184300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:52:42.726 [2024-12-09 23:34:23.184307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.184392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.726 [2024-12-09 23:34:23.184406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:42.726 [2024-12-09 23:34:23.184415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:52:42.726 [2024-12-09 23:34:23.184421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.726 [2024-12-09 23:34:23.184533] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:42.726 [2024-12-09 23:34:23.184543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:42.726 [2024-12-09 23:34:23.184552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:42.726 [2024-12-09 23:34:23.184576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:52:42.726 [2024-12-09 23:34:23.184591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:42.726 [2024-12-09 23:34:23.184600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:42.726 [2024-12-09 23:34:23.184606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:42.726 [2024-12-09 23:34:23.184621] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:52:42.726 [2024-12-09 23:34:23.184631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:42.726 [2024-12-09 23:34:23.184647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:52:42.726 [2024-12-09 23:34:23.184653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:42.726 [2024-12-09 23:34:23.184669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:52:42.726 [2024-12-09 23:34:23.184677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:42.726 [2024-12-09 23:34:23.184692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:42.726 [2024-12-09 23:34:23.184698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:42.726 [2024-12-09 23:34:23.184714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:42.726 [2024-12-09 23:34:23.184722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:42.726 [2024-12-09 23:34:23.184737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:42.726 [2024-12-09 23:34:23.184743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:42.726 [2024-12-09 23:34:23.184758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:52:42.726 [2024-12-09 23:34:23.184766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:42.726 [2024-12-09 23:34:23.184782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:52:42.726 [2024-12-09 23:34:23.184788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:42.726 [2024-12-09 23:34:23.184802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:52:42.726 [2024-12-09 23:34:23.184810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:52:42.726 [2024-12-09 23:34:23.184826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.726 [2024-12-09 23:34:23.184841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:52:42.726 [2024-12-09 23:34:23.184848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:52:42.727 [2024-12-09 23:34:23.184856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.727 [2024-12-09 23:34:23.184862] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:52:42.727 [2024-12-09 23:34:23.184871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:42.727 [2024-12-09 23:34:23.184879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:42.727 [2024-12-09 23:34:23.184887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:42.727 [2024-12-09 23:34:23.184895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:42.727 [2024-12-09 23:34:23.184905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:42.727 [2024-12-09 23:34:23.184912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:42.727 [2024-12-09 23:34:23.184920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:42.727 [2024-12-09 23:34:23.184927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:42.727 [2024-12-09 23:34:23.184935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:42.727 [2024-12-09 23:34:23.184943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:42.727 [2024-12-09 23:34:23.184956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.184966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:42.727 [2024-12-09 23:34:23.184975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.184993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:52:42.727 [2024-12-09 23:34:23.185010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:52:42.727 [2024-12-09 23:34:23.185019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:52:42.727 [2024-12-09 23:34:23.185026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:52:42.727 [2024-12-09 23:34:23.185035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:52:42.727 [2024-12-09 23:34:23.185093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:42.727 [2024-12-09 23:34:23.185102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:42.727 [2024-12-09 23:34:23.185121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:42.727 [2024-12-09 23:34:23.185128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:42.727 [2024-12-09 23:34:23.185137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:52:42.727 [2024-12-09 23:34:23.185145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.727 [2024-12-09 23:34:23.185154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:42.727 [2024-12-09 23:34:23.185162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:52:42.727 [2024-12-09 23:34:23.185170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.727 [2024-12-09 23:34:23.185213] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:52:42.727 [2024-12-09 23:34:23.185226] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:52:46.011 [2024-12-09 23:34:26.009964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.011 [2024-12-09 23:34:26.010031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:52:46.011 [2024-12-09 23:34:26.010046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2824.740 ms 00:52:46.011 [2024-12-09 23:34:26.010057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.011 [2024-12-09 23:34:26.035195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.011 [2024-12-09 23:34:26.035345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:46.011 [2024-12-09 23:34:26.035362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.934 ms 00:52:46.011 [2024-12-09 23:34:26.035372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.011 [2024-12-09 23:34:26.035441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.011 [2024-12-09 23:34:26.035453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:52:46.011 [2024-12-09 23:34:26.035462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:52:46.011 [2024-12-09 23:34:26.035475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.011 [2024-12-09 23:34:26.065759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.011 [2024-12-09 23:34:26.065795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:46.011 [2024-12-09 23:34:26.065805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.250 ms 00:52:46.011 [2024-12-09 23:34:26.065816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.011 [2024-12-09 23:34:26.065844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.065855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:46.012 [2024-12-09 23:34:26.065864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:46.012 [2024-12-09 23:34:26.065873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.066257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.066276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:46.012 [2024-12-09 23:34:26.066291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:52:46.012 [2024-12-09 23:34:26.066300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.066337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.066347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:46.012 [2024-12-09 23:34:26.066357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:52:46.012 [2024-12-09 23:34:26.066368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.080103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.080138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:46.012 [2024-12-09 23:34:26.080148] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.718 ms 00:52:46.012 [2024-12-09 23:34:26.080158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.104289] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:52:46.012 [2024-12-09 23:34:26.105265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.105297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:52:46.012 [2024-12-09 23:34:26.105314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.036 ms 00:52:46.012 [2024-12-09 23:34:26.105324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.129483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.129518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:52:46.012 [2024-12-09 23:34:26.129532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.117 ms 00:52:46.012 [2024-12-09 23:34:26.129541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.129621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.129634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:52:46.012 [2024-12-09 23:34:26.129647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:52:46.012 [2024-12-09 23:34:26.129655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.152255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.152289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:52:46.012 [2024-12-09 23:34:26.152302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.549 ms 00:52:46.012 [2024-12-09 23:34:26.152312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.174544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.174575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:52:46.012 [2024-12-09 23:34:26.174587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.193 ms 00:52:46.012 [2024-12-09 23:34:26.174595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.175172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.175183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:52:46.012 [2024-12-09 23:34:26.175193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:52:46.012 [2024-12-09 23:34:26.175203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.245452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.245496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:52:46.012 [2024-12-09 23:34:26.245514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.214 ms 00:52:46.012 [2024-12-09 23:34:26.245522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.269857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:52:46.012 [2024-12-09 23:34:26.270000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:52:46.012 [2024-12-09 23:34:26.270021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.259 ms 00:52:46.012 [2024-12-09 23:34:26.270029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.293959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.294010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:52:46.012 [2024-12-09 23:34:26.294025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.685 ms 00:52:46.012 [2024-12-09 23:34:26.294032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.317182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.317218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:52:46.012 [2024-12-09 23:34:26.317231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.113 ms 00:52:46.012 [2024-12-09 23:34:26.317239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.317278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.317288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:52:46.012 [2024-12-09 23:34:26.317300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:52:46.012 [2024-12-09 23:34:26.317307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.317379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:46.012 [2024-12-09 23:34:26.317391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:52:46.012 [2024-12-09 23:34:26.317401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:52:46.012 [2024-12-09 23:34:26.317408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:46.012 [2024-12-09 23:34:26.318677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3145.750 ms, result 0 00:52:46.012 { 00:52:46.012 "name": "ftl", 00:52:46.012 "uuid": "a267ee6b-6a37-4ef3-b054-685c3a12fe75" 00:52:46.012 } 00:52:46.012 23:34:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:52:46.012 [2024-12-09 23:34:26.533759] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:46.012 23:34:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:52:46.270 23:34:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:52:46.528 [2024-12-09 23:34:26.930141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:52:46.528 23:34:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:52:46.528 [2024-12-09 23:34:27.122557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:46.528 23:34:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:52:47.093 Fill FTL, iteration 1 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82903 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82903 /var/tmp/spdk.tgt.sock 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82903 ']' 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:52:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:47.093 23:34:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:47.093 [2024-12-09 23:34:27.544156] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
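The tcp_initiator_setup steps traced here amount to: start a second spdk_tgt pinned to core 1 with its own RPC socket, wait for that socket to answer, then attach the FTL bdev exported by the main target over NVMe/TCP so it appears locally as ftln1. A minimal sketch, with paths and flags copied from this run; the readiness loop is a simplified stand-in for the script's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.tgt.sock

    # Second SPDK target on core 1 with a dedicated RPC socket.
    "$SPDK/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket="$RPC_SOCK" &
    spdk_ini_pid=$!

    # Poll the RPC socket until the target answers (simplified waitforlisten).
    until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Attach the FTL bdev served by the main target; it shows up as ftln1.
    "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0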
00:52:47.093 [2024-12-09 23:34:27.544383] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82903 ] 00:52:47.093 [2024-12-09 23:34:27.704325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:47.351 [2024-12-09 23:34:27.801681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:47.916 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:47.916 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:52:47.916 23:34:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:52:48.174 ftln1 00:52:48.174 23:34:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:52:48.174 23:34:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82903 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82903 ']' 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82903 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82903 00:52:48.432 killing process with pid 82903 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82903' 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82903 00:52:48.432 23:34:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82903 00:52:49.807 23:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:52:49.807 23:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:52:49.807 [2024-12-09 23:34:30.334932] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
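The spdk_dd invocation launched above is the write half of iteration 1: it streams 1024 MiB of /dev/urandom into ftln1 in 1 MiB blocks at queue depth 2, starting at block offset 0. Reconstructed from the flags in the trace, with only line wrapping added:

    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket="$RPC_SOCK" \
        --json="$SPDK/test/ftl/config/ini.json" \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0

On the next pass --seek is advanced by --count (1024), so iteration 2 writes the second gigabyte of the device.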
00:52:49.807 [2024-12-09 23:34:30.335063] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82950 ] 00:52:50.064 [2024-12-09 23:34:30.491179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:50.064 [2024-12-09 23:34:30.567872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:51.435  [2024-12-09T23:34:33.005Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-09T23:34:33.941Z] Copying: 519/1024 [MB] (260 MBps) [2024-12-09T23:34:34.878Z] Copying: 777/1024 [MB] (258 MBps) [2024-12-09T23:34:35.447Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:52:54.811 00:52:54.811 Calculate MD5 checksum, iteration 1 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:54.811 23:34:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:55.071 [2024-12-09 23:34:35.506951] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
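The checksum half of the iteration reads the same 1024 MiB window back out of ftln1 into a scratch file and hashes it; --skip mirrors the --seek used for the fill (0 for iteration 1). As the trace shows:

    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket="$RPC_SOCK" \
        --json="$SPDK/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$SPDK/test/ftl/file" \
        --bs=1048576 --count=1024 --qd=2 --skip=0

    # Record the digest so it can be compared after the upgrade/restart.
    sums[i]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')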
00:52:55.071 [2024-12-09 23:34:35.507932] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83005 ] 00:52:55.071 [2024-12-09 23:34:35.679742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:55.331 [2024-12-09 23:34:35.768395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:56.708  [2024-12-09T23:34:37.916Z] Copying: 678/1024 [MB] (678 MBps) [2024-12-09T23:34:38.489Z] Copying: 1024/1024 [MB] (average 645 MBps) 00:52:57.853 00:52:57.853 23:34:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:52:57.853 23:34:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d88fd63d2c50505fd6d882bc4a4c98f3 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:53:00.405 Fill FTL, iteration 2 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:00.405 23:34:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:53:00.405 [2024-12-09 23:34:40.592051] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
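Putting the two halves together, the fill/checksum phase of upgrade_shutdown.sh reduces to a loop roughly like the sketch below; the variable names (seek, skip, sums, iterations) are the script's own as seen in the xtrace, tcp_dd is the ftl/common.sh helper wrapping spdk_dd as above, and the exact statement ordering inside the loop is inferred:

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$SPDK/test/ftl/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')
    done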
00:53:00.405 [2024-12-09 23:34:40.592285] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83061 ] 00:53:00.405 [2024-12-09 23:34:40.750682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:00.405 [2024-12-09 23:34:40.863617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:01.788  [2024-12-09T23:34:43.367Z] Copying: 182/1024 [MB] (182 MBps) [2024-12-09T23:34:44.303Z] Copying: 358/1024 [MB] (176 MBps) [2024-12-09T23:34:45.675Z] Copying: 576/1024 [MB] (218 MBps) [2024-12-09T23:34:46.240Z] Copying: 802/1024 [MB] (226 MBps) [2024-12-09T23:34:47.181Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:53:06.545 00:53:06.545 Calculate MD5 checksum, iteration 2 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:06.545 23:34:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:06.545 [2024-12-09 23:34:47.056313] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
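After the second checksum, the test flips FTL properties over RPC, as traced below: it enables verbose_mode, counts the NV-cache chunks that actually hold data (the jq filter is the one from the trace; this run finds 3), and only then arms prep_upgrade_on_shutdown before killing the target. A sketch of that round-trip; the reaction to a zero count is assumed here:

    "$SPDK/scripts/rpc.py" bdev_ftl_set_property -b ftl -p verbose_mode -v true

    # Count cache chunks with non-zero utilization.
    used=$("$SPDK/scripts/rpc.py" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # nothing written; exact failure handling assumed

    # Arm the on-shutdown upgrade path, then shut the target down (killprocess).
    "$SPDK/scripts/rpc.py" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true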
00:53:06.545 [2024-12-09 23:34:47.056460] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83129 ] 00:53:06.805 [2024-12-09 23:34:47.218806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:06.805 [2024-12-09 23:34:47.348513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:08.720  [2024-12-09T23:34:49.927Z] Copying: 521/1024 [MB] (521 MBps) [2024-12-09T23:34:51.314Z] Copying: 1024/1024 [MB] (average 520 MBps) 00:53:10.678 00:53:10.678 23:34:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:53:10.678 23:34:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=281ea6fc5ff6c8d896d2013d8b50459c 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:53:13.227 [2024-12-09 23:34:53.492203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.227 [2024-12-09 23:34:53.492244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:53:13.227 [2024-12-09 23:34:53.492256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:13.227 [2024-12-09 23:34:53.492262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.227 [2024-12-09 23:34:53.492280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.227 [2024-12-09 23:34:53.492290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:53:13.227 [2024-12-09 23:34:53.492297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:13.227 [2024-12-09 23:34:53.492303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.227 [2024-12-09 23:34:53.492318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.227 [2024-12-09 23:34:53.492324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:53:13.227 [2024-12-09 23:34:53.492331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:13.227 [2024-12-09 23:34:53.492336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.227 [2024-12-09 23:34:53.492386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.172 ms, result 0 00:53:13.227 true 00:53:13.227 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:13.227 { 00:53:13.227 "name": "ftl", 00:53:13.227 "properties": [ 00:53:13.227 { 00:53:13.227 "name": "superblock_version", 00:53:13.227 "value": 5, 00:53:13.227 "read-only": true 00:53:13.227 }, 00:53:13.227 { 00:53:13.227 "name": "base_device", 00:53:13.227 "bands": [ 00:53:13.227 { 00:53:13.227 "id": 0, 00:53:13.227 "state": "FREE", 00:53:13.227 "validity": 0.0 
00:53:13.227 }, 00:53:13.227 { 00:53:13.228 "id": 1, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 2, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 3, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 4, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 5, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 6, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 7, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 8, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 9, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 10, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 11, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 12, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 13, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 14, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 15, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 16, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 17, 00:53:13.228 "state": "FREE", 00:53:13.228 "validity": 0.0 00:53:13.228 } 00:53:13.228 ], 00:53:13.228 "read-only": true 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "name": "cache_device", 00:53:13.228 "type": "bdev", 00:53:13.228 "chunks": [ 00:53:13.228 { 00:53:13.228 "id": 0, 00:53:13.228 "state": "INACTIVE", 00:53:13.228 "utilization": 0.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 1, 00:53:13.228 "state": "CLOSED", 00:53:13.228 "utilization": 1.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 2, 00:53:13.228 "state": "CLOSED", 00:53:13.228 "utilization": 1.0 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 3, 00:53:13.228 "state": "OPEN", 00:53:13.228 "utilization": 0.001953125 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "id": 4, 00:53:13.228 "state": "OPEN", 00:53:13.228 "utilization": 0.0 00:53:13.228 } 00:53:13.228 ], 00:53:13.228 "read-only": true 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "name": "verbose_mode", 00:53:13.228 "value": true, 00:53:13.228 "unit": "", 00:53:13.228 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:53:13.228 }, 00:53:13.228 { 00:53:13.228 "name": "prep_upgrade_on_shutdown", 00:53:13.228 "value": false, 00:53:13.228 "unit": "", 00:53:13.228 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:53:13.228 } 00:53:13.228 ] 00:53:13.228 } 00:53:13.228 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:53:13.489 [2024-12-09 23:34:53.884519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:53:13.489 [2024-12-09 23:34:53.884681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:53:13.489 [2024-12-09 23:34:53.884740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:53:13.489 [2024-12-09 23:34:53.884759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.489 [2024-12-09 23:34:53.884792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.489 [2024-12-09 23:34:53.884809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:53:13.489 [2024-12-09 23:34:53.884855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:13.489 [2024-12-09 23:34:53.884872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.489 [2024-12-09 23:34:53.884897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.489 [2024-12-09 23:34:53.884914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:53:13.489 [2024-12-09 23:34:53.884966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:13.489 [2024-12-09 23:34:53.884980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.489 [2024-12-09 23:34:53.885103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.570 ms, result 0 00:53:13.489 true 00:53:13.489 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:53:13.489 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:53:13.489 23:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:13.489 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:53:13.489 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:53:13.489 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:53:13.766 [2024-12-09 23:34:54.292880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.766 [2024-12-09 23:34:54.293017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:53:13.766 [2024-12-09 23:34:54.293068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:13.766 [2024-12-09 23:34:54.293087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.766 [2024-12-09 23:34:54.293133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.766 [2024-12-09 23:34:54.293153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:53:13.766 [2024-12-09 23:34:54.293169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:13.766 [2024-12-09 23:34:54.293183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:13.766 [2024-12-09 23:34:54.293200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:13.766 [2024-12-09 23:34:54.293207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:53:13.766 [2024-12-09 23:34:54.293213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:13.766 [2024-12-09 23:34:54.293219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:53:13.766 [2024-12-09 23:34:54.293269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.374 ms, result 0 00:53:13.766 true 00:53:13.766 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:14.061 { 00:53:14.061 "name": "ftl", 00:53:14.061 "properties": [ 00:53:14.061 { 00:53:14.061 "name": "superblock_version", 00:53:14.061 "value": 5, 00:53:14.062 "read-only": true 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "name": "base_device", 00:53:14.062 "bands": [ 00:53:14.062 { 00:53:14.062 "id": 0, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 1, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 2, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 3, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 4, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 5, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 6, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 7, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 8, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 9, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 10, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 11, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 12, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 13, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 14, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 15, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 16, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 17, 00:53:14.062 "state": "FREE", 00:53:14.062 "validity": 0.0 00:53:14.062 } 00:53:14.062 ], 00:53:14.062 "read-only": true 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "name": "cache_device", 00:53:14.062 "type": "bdev", 00:53:14.062 "chunks": [ 00:53:14.062 { 00:53:14.062 "id": 0, 00:53:14.062 "state": "INACTIVE", 00:53:14.062 "utilization": 0.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 1, 00:53:14.062 "state": "CLOSED", 00:53:14.062 "utilization": 1.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 2, 00:53:14.062 "state": "CLOSED", 00:53:14.062 "utilization": 1.0 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 3, 00:53:14.062 "state": "OPEN", 00:53:14.062 "utilization": 0.001953125 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "id": 4, 00:53:14.062 "state": "OPEN", 00:53:14.062 "utilization": 0.0 00:53:14.062 } 00:53:14.062 ], 00:53:14.062 "read-only": true 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "name": "verbose_mode", 
00:53:14.062 "value": true, 00:53:14.062 "unit": "", 00:53:14.062 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:53:14.062 }, 00:53:14.062 { 00:53:14.062 "name": "prep_upgrade_on_shutdown", 00:53:14.062 "value": true, 00:53:14.062 "unit": "", 00:53:14.062 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:53:14.062 } 00:53:14.062 ] 00:53:14.062 } 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82786 ]] 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82786 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82786 ']' 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82786 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82786 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82786' 00:53:14.062 killing process with pid 82786 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82786 00:53:14.062 23:34:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82786 00:53:14.637 [2024-12-09 23:34:55.067430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:53:14.637 [2024-12-09 23:34:55.079275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:14.637 [2024-12-09 23:34:55.079398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:53:14.637 [2024-12-09 23:34:55.079413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:14.637 [2024-12-09 23:34:55.079420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:14.637 [2024-12-09 23:34:55.079441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:53:14.637 [2024-12-09 23:34:55.081567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:14.637 [2024-12-09 23:34:55.081591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:53:14.637 [2024-12-09 23:34:55.081600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.115 ms 00:53:14.637 [2024-12-09 23:34:55.081610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.010438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.010485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:53:22.778 [2024-12-09 23:35:03.010500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7928.785 ms 00:53:22.778 [2024-12-09 23:35:03.010507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.011499] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.011517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:53:22.778 [2024-12-09 23:35:03.011525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.979 ms 00:53:22.778 [2024-12-09 23:35:03.011531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.012416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.012521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:53:22.778 [2024-12-09 23:35:03.012535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.866 ms 00:53:22.778 [2024-12-09 23:35:03.012546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.020219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.020248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:53:22.778 [2024-12-09 23:35:03.020255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.635 ms 00:53:22.778 [2024-12-09 23:35:03.020261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.025508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.025536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:53:22.778 [2024-12-09 23:35:03.025544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.219 ms 00:53:22.778 [2024-12-09 23:35:03.025551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.025617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.025630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:53:22.778 [2024-12-09 23:35:03.025638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:53:22.778 [2024-12-09 23:35:03.025644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.032569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.032596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:53:22.778 [2024-12-09 23:35:03.032603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.912 ms 00:53:22.778 [2024-12-09 23:35:03.032609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.039313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.039421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:53:22.778 [2024-12-09 23:35:03.039433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.678 ms 00:53:22.778 [2024-12-09 23:35:03.039439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.046515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.046614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:53:22.778 [2024-12-09 23:35:03.046626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.051 ms 00:53:22.778 [2024-12-09 23:35:03.046631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.053492] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.778 [2024-12-09 23:35:03.053587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:53:22.778 [2024-12-09 23:35:03.053599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.814 ms 00:53:22.778 [2024-12-09 23:35:03.053605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.778 [2024-12-09 23:35:03.053627] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:53:22.779 [2024-12-09 23:35:03.053644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:53:22.779 [2024-12-09 23:35:03.053652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:53:22.779 [2024-12-09 23:35:03.053658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:53:22.779 [2024-12-09 23:35:03.053664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:53:22.779 [2024-12-09 23:35:03.053762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:53:22.779 [2024-12-09 23:35:03.053768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a267ee6b-6a37-4ef3-b054-685c3a12fe75 00:53:22.779 [2024-12-09 23:35:03.053774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:53:22.779 [2024-12-09 23:35:03.053779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:53:22.779 [2024-12-09 23:35:03.053784] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:53:22.779 [2024-12-09 23:35:03.053790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:53:22.779 [2024-12-09 23:35:03.053798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:53:22.779 [2024-12-09 23:35:03.053804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:53:22.779 [2024-12-09 23:35:03.053812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:53:22.779 [2024-12-09 23:35:03.053816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:53:22.779 [2024-12-09 23:35:03.053821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:53:22.779 [2024-12-09 23:35:03.053828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.779 [2024-12-09 23:35:03.053834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:53:22.779 [2024-12-09 23:35:03.053842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:53:22.779 [2024-12-09 23:35:03.053847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.063224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.779 [2024-12-09 23:35:03.063249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:53:22.779 [2024-12-09 23:35:03.063261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.364 ms 00:53:22.779 [2024-12-09 23:35:03.063267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.063535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:22.779 [2024-12-09 23:35:03.063546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:53:22.779 [2024-12-09 23:35:03.063552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:53:22.779 [2024-12-09 23:35:03.063558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.096621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.096734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:22.779 [2024-12-09 23:35:03.096746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.096753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.096779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.096786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:22.779 [2024-12-09 23:35:03.096792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.096798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.096850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.096857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:22.779 [2024-12-09 23:35:03.096867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.096873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.096885] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.096892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:22.779 [2024-12-09 23:35:03.096898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.096903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.156192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.156245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:22.779 [2024-12-09 23:35:03.156260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.156267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.204459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.204498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:22.779 [2024-12-09 23:35:03.204506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.779 [2024-12-09 23:35:03.204512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.779 [2024-12-09 23:35:03.204581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.779 [2024-12-09 23:35:03.204589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:22.780 [2024-12-09 23:35:03.204595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 [2024-12-09 23:35:03.204637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.780 [2024-12-09 23:35:03.204644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:22.780 [2024-12-09 23:35:03.204651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 [2024-12-09 23:35:03.204723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.780 [2024-12-09 23:35:03.204731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:22.780 [2024-12-09 23:35:03.204737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 [2024-12-09 23:35:03.204768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.780 [2024-12-09 23:35:03.204774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:53:22.780 [2024-12-09 23:35:03.204780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 [2024-12-09 23:35:03.204817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.780 [2024-12-09 23:35:03.204824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:22.780 [2024-12-09 23:35:03.204830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 
[2024-12-09 23:35:03.204872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:53:22.780 [2024-12-09 23:35:03.204880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:22.780 [2024-12-09 23:35:03.204886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:53:22.780 [2024-12-09 23:35:03.204892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:22.780 [2024-12-09 23:35:03.205001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8125.666 ms, result 0 00:53:32.765 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83327 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83327 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83327 ']' 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:32.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:32.766 23:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:32.766 [2024-12-09 23:35:11.969456] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
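[Annotation] The shutdown traced above is the heart of the test: with prep_upgrade_on_shutdown set, FTL persists the L2P, NV cache metadata, valid map, P2L checkpoints, band and trim metadata, and finally the superblock before exiting (8125.666 ms end to end here). A minimal sketch of the RPC round-trip the script drives, assuming a live target with an FTL bdev named "ftl" and SPDK's rpc.py reachable at the path below:

    # Arm upgrade preparation on the live FTL bdev (same RPC as @56 above).
    scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

    # Count cache chunks that still hold data, exactly as the test's jq
    # filter at @63 does; a non-zero count guarantees the shutdown has
    # dirty state worth persisting.
    scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'

Both commands appear verbatim in the xtrace output, where the chunk count is captured as used=3 before the target is killed and relaunched for the upgrade path.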
00:53:32.766 [2024-12-09 23:35:11.969574] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83327 ] 00:53:32.766 [2024-12-09 23:35:12.123132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:32.766 [2024-12-09 23:35:12.200105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:32.766 [2024-12-09 23:35:12.783250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:32.766 [2024-12-09 23:35:12.783475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:32.766 [2024-12-09 23:35:12.926464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.926597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:53:32.766 [2024-12-09 23:35:12.926614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:32.766 [2024-12-09 23:35:12.926621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.926668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.926677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:32.766 [2024-12-09 23:35:12.926683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:53:32.766 [2024-12-09 23:35:12.926689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.926709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:53:32.766 [2024-12-09 23:35:12.927224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:53:32.766 [2024-12-09 23:35:12.927237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.927243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:32.766 [2024-12-09 23:35:12.927250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:53:32.766 [2024-12-09 23:35:12.927256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.928226] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:53:32.766 [2024-12-09 23:35:12.937926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.938098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:53:32.766 [2024-12-09 23:35:12.938125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.702 ms 00:53:32.766 [2024-12-09 23:35:12.938132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.938174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.938181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:53:32.766 [2024-12-09 23:35:12.938187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:53:32.766 [2024-12-09 23:35:12.938193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.942679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 
23:35:12.942705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:32.766 [2024-12-09 23:35:12.942713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.438 ms 00:53:32.766 [2024-12-09 23:35:12.942719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.942761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.942768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:32.766 [2024-12-09 23:35:12.942774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:53:32.766 [2024-12-09 23:35:12.942780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.942816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.942825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:53:32.766 [2024-12-09 23:35:12.942831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:32.766 [2024-12-09 23:35:12.942837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.942852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:53:32.766 [2024-12-09 23:35:12.945452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.945558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:32.766 [2024-12-09 23:35:12.945571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.604 ms 00:53:32.766 [2024-12-09 23:35:12.945581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.945610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.945617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:53:32.766 [2024-12-09 23:35:12.945623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:53:32.766 [2024-12-09 23:35:12.945628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.945645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:53:32.766 [2024-12-09 23:35:12.945662] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:53:32.766 [2024-12-09 23:35:12.945698] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:53:32.766 [2024-12-09 23:35:12.945709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:53:32.766 [2024-12-09 23:35:12.945788] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:53:32.766 [2024-12-09 23:35:12.945796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:53:32.766 [2024-12-09 23:35:12.945805] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:53:32.766 [2024-12-09 23:35:12.945812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:53:32.766 [2024-12-09 23:35:12.945819] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:53:32.766 [2024-12-09 23:35:12.945827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:53:32.766 [2024-12-09 23:35:12.945832] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:53:32.766 [2024-12-09 23:35:12.945838] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:53:32.766 [2024-12-09 23:35:12.945844] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:53:32.766 [2024-12-09 23:35:12.945850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.945855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:53:32.766 [2024-12-09 23:35:12.945861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:53:32.766 [2024-12-09 23:35:12.945866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.945930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.766 [2024-12-09 23:35:12.945936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:53:32.766 [2024-12-09 23:35:12.945944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:53:32.766 [2024-12-09 23:35:12.945949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.766 [2024-12-09 23:35:12.946042] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:53:32.766 [2024-12-09 23:35:12.946050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:53:32.766 [2024-12-09 23:35:12.946056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:32.766 [2024-12-09 23:35:12.946062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:53:32.766 [2024-12-09 23:35:12.946073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:53:32.766 [2024-12-09 23:35:12.946084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:53:32.766 [2024-12-09 23:35:12.946090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:53:32.766 [2024-12-09 23:35:12.946095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:53:32.766 [2024-12-09 23:35:12.946105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:53:32.766 [2024-12-09 23:35:12.946111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:53:32.766 [2024-12-09 23:35:12.946121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:53:32.766 [2024-12-09 23:35:12.946126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:53:32.766 [2024-12-09 23:35:12.946136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:53:32.766 [2024-12-09 23:35:12.946141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.766 [2024-12-09 23:35:12.946147] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:53:32.766 [2024-12-09 23:35:12.946152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:53:32.766 [2024-12-09 23:35:12.946156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:32.766 [2024-12-09 23:35:12.946161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:53:32.767 [2024-12-09 23:35:12.946171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:53:32.767 [2024-12-09 23:35:12.946176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:53:32.767 [2024-12-09 23:35:12.946185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:53:32.767 [2024-12-09 23:35:12.946190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:53:32.767 [2024-12-09 23:35:12.946200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:53:32.767 [2024-12-09 23:35:12.946205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:53:32.767 [2024-12-09 23:35:12.946215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:53:32.767 [2024-12-09 23:35:12.946219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:53:32.767 [2024-12-09 23:35:12.946229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:53:32.767 [2024-12-09 23:35:12.946244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:53:32.767 [2024-12-09 23:35:12.946259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:53:32.767 [2024-12-09 23:35:12.946263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946268] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:53:32.767 [2024-12-09 23:35:12.946275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:53:32.767 [2024-12-09 23:35:12.946280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:32.767 [2024-12-09 23:35:12.946293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:53:32.767 [2024-12-09 23:35:12.946298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:53:32.767 [2024-12-09 23:35:12.946303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:53:32.767 [2024-12-09 23:35:12.946308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:53:32.767 [2024-12-09 23:35:12.946313] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:53:32.767 [2024-12-09 23:35:12.946318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:53:32.767 [2024-12-09 23:35:12.946324] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:53:32.767 [2024-12-09 23:35:12.946331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:53:32.767 [2024-12-09 23:35:12.946343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:53:32.767 [2024-12-09 23:35:12.946359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:53:32.767 [2024-12-09 23:35:12.946365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:53:32.767 [2024-12-09 23:35:12.946370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:53:32.767 [2024-12-09 23:35:12.946375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:53:32.767 [2024-12-09 23:35:12.946412] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:53:32.767 [2024-12-09 23:35:12.946418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:53:32.767 [2024-12-09 23:35:12.946430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:53:32.767 [2024-12-09 23:35:12.946435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:53:32.767 [2024-12-09 23:35:12.946440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:53:32.767 [2024-12-09 23:35:12.946446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:32.767 [2024-12-09 23:35:12.946454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:53:32.767 [2024-12-09 23:35:12.946460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:53:32.767 [2024-12-09 23:35:12.946467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:32.767 [2024-12-09 23:35:12.946498] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:53:32.767 [2024-12-09 23:35:12.946506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:53:36.059 [2024-12-09 23:35:16.139669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.139745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:53:36.059 [2024-12-09 23:35:16.139765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3193.157 ms 00:53:36.059 [2024-12-09 23:35:16.139775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.171826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.171894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:36.059 [2024-12-09 23:35:16.171911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.778 ms 00:53:36.059 [2024-12-09 23:35:16.171920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.172052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.172073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:36.059 [2024-12-09 23:35:16.172084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:53:36.059 [2024-12-09 23:35:16.172092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.207486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.207536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:36.059 [2024-12-09 23:35:16.207553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.349 ms 00:53:36.059 [2024-12-09 23:35:16.207562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.207614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.207624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:36.059 [2024-12-09 23:35:16.207633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:36.059 [2024-12-09 23:35:16.207642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.208276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.208302] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:36.059 [2024-12-09 23:35:16.208314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:53:36.059 [2024-12-09 23:35:16.208322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.208377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.208387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:36.059 [2024-12-09 23:35:16.208397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:53:36.059 [2024-12-09 23:35:16.208406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.227029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.227081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:36.059 [2024-12-09 23:35:16.227094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.594 ms 00:53:36.059 [2024-12-09 23:35:16.227103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.253949] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:53:36.059 [2024-12-09 23:35:16.254023] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:53:36.059 [2024-12-09 23:35:16.254041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.254051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:53:36.059 [2024-12-09 23:35:16.254062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.788 ms 00:53:36.059 [2024-12-09 23:35:16.254069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.268819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.268886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:53:36.059 [2024-12-09 23:35:16.268898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.686 ms 00:53:36.059 [2024-12-09 23:35:16.268907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.281510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.281557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:53:36.059 [2024-12-09 23:35:16.281569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.545 ms 00:53:36.059 [2024-12-09 23:35:16.281577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.294039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.294084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:53:36.059 [2024-12-09 23:35:16.294097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.410 ms 00:53:36.059 [2024-12-09 23:35:16.294104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.294762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.294788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:36.059 [2024-12-09 
23:35:16.294798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:53:36.059 [2024-12-09 23:35:16.294806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.359815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.360087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:53:36.059 [2024-12-09 23:35:16.360115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.983 ms 00:53:36.059 [2024-12-09 23:35:16.360125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.371504] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:36.059 [2024-12-09 23:35:16.372401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.372436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:36.059 [2024-12-09 23:35:16.372447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.219 ms 00:53:36.059 [2024-12-09 23:35:16.372454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.372549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.372563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:53:36.059 [2024-12-09 23:35:16.372572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:53:36.059 [2024-12-09 23:35:16.372580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.372636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.372647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:53:36.059 [2024-12-09 23:35:16.372655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:53:36.059 [2024-12-09 23:35:16.372663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.372685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.372694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:36.059 [2024-12-09 23:35:16.372706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:53:36.059 [2024-12-09 23:35:16.372714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.372747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:53:36.059 [2024-12-09 23:35:16.372756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.372764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:53:36.059 [2024-12-09 23:35:16.372773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:53:36.059 [2024-12-09 23:35:16.372780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.396733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.396777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:53:36.059 [2024-12-09 23:35:16.396788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.933 ms 00:53:36.059 [2024-12-09 23:35:16.396796] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.396869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.059 [2024-12-09 23:35:16.396879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:36.059 [2024-12-09 23:35:16.396888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:53:36.059 [2024-12-09 23:35:16.396895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.059 [2024-12-09 23:35:16.397885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3470.969 ms, result 0 00:53:36.059 [2024-12-09 23:35:16.413119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:36.059 [2024-12-09 23:35:16.429095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:36.059 [2024-12-09 23:35:16.437232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:36.059 23:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:36.059 23:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:36.059 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:36.060 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:53:36.060 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:53:36.060 [2024-12-09 23:35:16.665331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.060 [2024-12-09 23:35:16.665522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:53:36.060 [2024-12-09 23:35:16.665593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:53:36.060 [2024-12-09 23:35:16.665617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.060 [2024-12-09 23:35:16.665660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.060 [2024-12-09 23:35:16.665694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:53:36.060 [2024-12-09 23:35:16.665715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:36.060 [2024-12-09 23:35:16.665734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.060 [2024-12-09 23:35:16.665766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:36.060 [2024-12-09 23:35:16.665787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:53:36.060 [2024-12-09 23:35:16.665809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:36.060 [2024-12-09 23:35:16.665860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:36.060 [2024-12-09 23:35:16.665946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.599 ms, result 0 00:53:36.060 true 00:53:36.060 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:36.320 { 00:53:36.320 "name": "ftl", 00:53:36.320 "properties": [ 00:53:36.320 { 00:53:36.320 "name": "superblock_version", 00:53:36.320 "value": 5, 00:53:36.320 "read-only": true 00:53:36.320 }, 
00:53:36.320 { 00:53:36.320 "name": "base_device", 00:53:36.320 "bands": [ 00:53:36.320 { 00:53:36.320 "id": 0, 00:53:36.320 "state": "CLOSED", 00:53:36.320 "validity": 1.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 1, 00:53:36.320 "state": "CLOSED", 00:53:36.320 "validity": 1.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 2, 00:53:36.320 "state": "CLOSED", 00:53:36.320 "validity": 0.007843137254901933 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 3, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 4, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 5, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 6, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 7, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 8, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 9, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 10, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 11, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 12, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 13, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 14, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 15, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 16, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 17, 00:53:36.320 "state": "FREE", 00:53:36.320 "validity": 0.0 00:53:36.320 } 00:53:36.320 ], 00:53:36.320 "read-only": true 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "name": "cache_device", 00:53:36.320 "type": "bdev", 00:53:36.320 "chunks": [ 00:53:36.320 { 00:53:36.320 "id": 0, 00:53:36.320 "state": "INACTIVE", 00:53:36.320 "utilization": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 1, 00:53:36.320 "state": "OPEN", 00:53:36.320 "utilization": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 2, 00:53:36.320 "state": "OPEN", 00:53:36.320 "utilization": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 3, 00:53:36.320 "state": "FREE", 00:53:36.320 "utilization": 0.0 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "id": 4, 00:53:36.320 "state": "FREE", 00:53:36.320 "utilization": 0.0 00:53:36.320 } 00:53:36.320 ], 00:53:36.320 "read-only": true 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "name": "verbose_mode", 00:53:36.320 "value": true, 00:53:36.320 "unit": "", 00:53:36.320 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:53:36.320 }, 00:53:36.320 { 00:53:36.320 "name": "prep_upgrade_on_shutdown", 00:53:36.320 "value": false, 00:53:36.320 "unit": "", 00:53:36.320 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:53:36.320 } 00:53:36.320 ] 00:53:36.320 } 00:53:36.320 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:53:36.320 23:35:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:36.321 23:35:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:53:36.582 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:53:36.582 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:53:36.582 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:53:36.582 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:53:36.582 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:36.843 Validate MD5 checksum, iteration 1 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:36.843 23:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:36.843 [2024-12-09 23:35:17.419776] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
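As a reading aid, a minimal sketch of what the two jq filters above compute from the bdev_ftl_get_properties dump: the number of cache_device chunks with nonzero utilization, and the number of bands still in the OPENED state. The filters and the rpc.py path are verbatim from this trace; the props/used/opened variable names are illustrative, not the script's own.
  props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  echo "used=$used opened=$opened"   # both 0 at this point in the run: nothing in flight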
00:53:36.843 [2024-12-09 23:35:17.419920] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83400 ] 00:53:37.105 [2024-12-09 23:35:17.581746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:37.105 [2024-12-09 23:35:17.708501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:39.026  [2024-12-09T23:35:20.236Z] Copying: 594/1024 [MB] (594 MBps) [2024-12-09T23:35:21.624Z] Copying: 1024/1024 [MB] (average 589 MBps) 00:53:40.988 00:53:40.988 23:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:53:40.988 23:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:43.539 Validate MD5 checksum, iteration 2 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d88fd63d2c50505fd6d882bc4a4c98f3 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d88fd63d2c50505fd6d882bc4a4c98f3 != \d\8\8\f\d\6\3\d\2\c\5\0\5\0\5\f\d\6\d\8\8\2\b\c\4\a\4\c\9\8\f\3 ]] 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:43.539 23:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:43.539 [2024-12-09 23:35:23.617500] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
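Each "Validate MD5 checksum" iteration (iteration 1 above, iteration 2 just starting) follows the same pattern: read 1024 MiB out of the FTL bdev over NVMe/TCP at a growing offset, hash the slice, and compare against the sum recorded for it. A sketch with the dd flags verbatim from the trace; the temp-file path and the expected-sum bookkeeping are illustrative.
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --ib=ftln1 --of="$tmp" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
      sum=$(md5sum "$tmp" | cut -f1 -d' ')
      [[ $sum == "${expected[i]}" ]] || return 1   # d88fd63d... / 281ea6fc... in this run
  done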
00:53:43.539 [2024-12-09 23:35:23.617689] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83476 ] 00:53:43.539 [2024-12-09 23:35:23.778280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:43.539 [2024-12-09 23:35:23.904213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:44.925  [2024-12-09T23:35:26.166Z] Copying: 673/1024 [MB] (673 MBps) [2024-12-09T23:35:30.356Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:53:49.720 00:53:49.720 23:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:53:49.720 23:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:51.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=281ea6fc5ff6c8d896d2013d8b50459c 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 281ea6fc5ff6c8d896d2013d8b50459c != \2\8\1\e\a\6\f\c\5\f\f\6\c\8\d\8\9\6\d\2\0\1\3\d\8\b\5\0\4\5\9\c ]] 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83327 ]] 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83327 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83567 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83567 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83567 ']' 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:51.621 23:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:53:51.621 [2024-12-09 23:35:31.995239] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:53:51.621 [2024-12-09 23:35:31.995370] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83567 ] 00:53:51.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83327 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:53:51.621 [2024-12-09 23:35:32.142115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:51.621 [2024-12-09 23:35:32.222798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:52.186 [2024-12-09 23:35:32.801545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:52.186 [2024-12-09 23:35:32.801599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:52.447 [2024-12-09 23:35:32.944436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.944474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:53:52.447 [2024-12-09 23:35:32.944484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:52.447 [2024-12-09 23:35:32.944490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.944534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.944543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:52.447 [2024-12-09 23:35:32.944549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:53:52.447 [2024-12-09 23:35:32.944555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.944573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:53:52.447 [2024-12-09 23:35:32.945099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:53:52.447 [2024-12-09 23:35:32.945118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.945124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:52.447 [2024-12-09 23:35:32.945131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:53:52.447 [2024-12-09 23:35:32.945137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.945361] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:53:52.447 [2024-12-09 23:35:32.957915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.957942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:53:52.447 [2024-12-09 23:35:32.957952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.555 ms 
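The "SHM: clean 0, shm_clean 0" record in the super-block load above is the point of the exercise: the kill -9 left the device marked dirty, so instead of a fast clean load the startup that follows walks the recovery chain (recover band state, restore and preprocess P2L checkpoints, recover open chunks, rebuild L2P). A quick filter for pulling those milestones out of a capture like this one (the log file name is hypothetical):
  grep -E "name: (Recover|Restore|Preprocess|Chunk recovery)" ftl_startup.log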
00:53:52.447 [2024-12-09 23:35:32.957959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.964732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.964759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:53:52.447 [2024-12-09 23:35:32.964767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:53:52.447 [2024-12-09 23:35:32.964773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.965020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.965029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:52.447 [2024-12-09 23:35:32.965035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:53:52.447 [2024-12-09 23:35:32.965041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.447 [2024-12-09 23:35:32.965080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.447 [2024-12-09 23:35:32.965087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:52.447 [2024-12-09 23:35:32.965094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:53:52.447 [2024-12-09 23:35:32.965099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.965118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.448 [2024-12-09 23:35:32.965125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:53:52.448 [2024-12-09 23:35:32.965131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:52.448 [2024-12-09 23:35:32.965136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.965151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:53:52.448 [2024-12-09 23:35:32.967512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.448 [2024-12-09 23:35:32.967532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:52.448 [2024-12-09 23:35:32.967539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.363 ms 00:53:52.448 [2024-12-09 23:35:32.967545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.967568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.448 [2024-12-09 23:35:32.967575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:53:52.448 [2024-12-09 23:35:32.967581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:52.448 [2024-12-09 23:35:32.967587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.967603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:53:52.448 [2024-12-09 23:35:32.967618] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:53:52.448 [2024-12-09 23:35:32.967644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:53:52.448 [2024-12-09 23:35:32.967657] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:53:52.448 [2024-12-09 
23:35:32.967736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:53:52.448 [2024-12-09 23:35:32.967748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:53:52.448 [2024-12-09 23:35:32.967757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:53:52.448 [2024-12-09 23:35:32.967764] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:53:52.448 [2024-12-09 23:35:32.967771] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:53:52.448 [2024-12-09 23:35:32.967777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:53:52.448 [2024-12-09 23:35:32.967783] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:53:52.448 [2024-12-09 23:35:32.967788] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:53:52.448 [2024-12-09 23:35:32.967794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:53:52.448 [2024-12-09 23:35:32.967802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.448 [2024-12-09 23:35:32.967808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:53:52.448 [2024-12-09 23:35:32.967814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:53:52.448 [2024-12-09 23:35:32.967819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.967883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.448 [2024-12-09 23:35:32.967890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:53:52.448 [2024-12-09 23:35:32.967895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:53:52.448 [2024-12-09 23:35:32.967900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.448 [2024-12-09 23:35:32.967974] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:53:52.448 [2024-12-09 23:35:32.968001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:53:52.448 [2024-12-09 23:35:32.968007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:53:52.448 [2024-12-09 23:35:32.968026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:53:52.448 [2024-12-09 23:35:32.968038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:53:52.448 [2024-12-09 23:35:32.968043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:53:52.448 [2024-12-09 23:35:32.968048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:53:52.448 [2024-12-09 23:35:32.968058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:53:52.448 [2024-12-09 23:35:32.968063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 
23:35:32.968068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:53:52.448 [2024-12-09 23:35:32.968073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:53:52.448 [2024-12-09 23:35:32.968078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:53:52.448 [2024-12-09 23:35:32.968088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:53:52.448 [2024-12-09 23:35:32.968093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:53:52.448 [2024-12-09 23:35:32.968103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:53:52.448 [2024-12-09 23:35:32.968122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:53:52.448 [2024-12-09 23:35:32.968137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:53:52.448 [2024-12-09 23:35:32.968151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:53:52.448 [2024-12-09 23:35:32.968165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:53:52.448 [2024-12-09 23:35:32.968180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:53:52.448 [2024-12-09 23:35:32.968196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:53:52.448 [2024-12-09 23:35:32.968211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:53:52.448 [2024-12-09 23:35:32.968216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968221] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:53:52.448 [2024-12-09 23:35:32.968226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:53:52.448 
[2024-12-09 23:35:32.968232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:52.448 [2024-12-09 23:35:32.968242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:53:52.448 [2024-12-09 23:35:32.968247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:53:52.448 [2024-12-09 23:35:32.968252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:53:52.448 [2024-12-09 23:35:32.968257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:53:52.448 [2024-12-09 23:35:32.968262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:53:52.448 [2024-12-09 23:35:32.968267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:53:52.448 [2024-12-09 23:35:32.968273] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:53:52.448 [2024-12-09 23:35:32.968281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:53:52.448 [2024-12-09 23:35:32.968292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:53:52.448 [2024-12-09 23:35:32.968308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:53:52.448 [2024-12-09 23:35:32.968313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:53:52.448 [2024-12-09 23:35:32.968319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:53:52.448 [2024-12-09 23:35:32.968324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:53:52.448 [2024-12-09 23:35:32.968350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:53:52.449 [2024-12-09 23:35:32.968356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:53:52.449 [2024-12-09 23:35:32.968362] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:53:52.449 [2024-12-09 23:35:32.968368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:52.449 [2024-12-09 23:35:32.968376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:53:52.449 [2024-12-09 23:35:32.968382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:53:52.449 [2024-12-09 23:35:32.968387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:53:52.449 [2024-12-09 23:35:32.968392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:53:52.449 [2024-12-09 23:35:32.968398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:32.968403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:53:52.449 [2024-12-09 23:35:32.968409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.476 ms 00:53:52.449 [2024-12-09 23:35:32.968414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:32.987534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:32.987559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:52.449 [2024-12-09 23:35:32.987568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.082 ms 00:53:52.449 [2024-12-09 23:35:32.987573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:32.987604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:32.987610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:52.449 [2024-12-09 23:35:32.987616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:53:52.449 [2024-12-09 23:35:32.987622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.012230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.012256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:52.449 [2024-12-09 23:35:33.012265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.566 ms 00:53:52.449 [2024-12-09 23:35:33.012271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.012293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.012299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:52.449 [2024-12-09 23:35:33.012306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:52.449 [2024-12-09 23:35:33.012313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.012388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.012396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
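A consistency check worth knowing when reading the superblock layout records above: region sizes are given in 4 KiB FTL blocks, so the hex blk_sz values convert directly to the MiB figures printed in the region dump (MiB = blk_sz * 4096 / 2^20, assuming the 4 KiB block size, which the figures here are consistent with). Two spot checks in shell arithmetic:
  echo $(( 0x480000 * 4096 / 1048576 ))   # 18432 -> matches the 18432.00 MiB data_btm region
  echo $(( 0xe80 * 4096 ))                # 15204352 bytes = 14.50 MiB -> matches the l2p region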
00:53:52.449 [2024-12-09 23:35:33.012402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:53:52.449 [2024-12-09 23:35:33.012407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.012438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.012445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:52.449 [2024-12-09 23:35:33.012451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:53:52.449 [2024-12-09 23:35:33.012456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.024132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.024155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:52.449 [2024-12-09 23:35:33.024163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.657 ms 00:53:52.449 [2024-12-09 23:35:33.024169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.024246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.024254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:53:52.449 [2024-12-09 23:35:33.024261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:52.449 [2024-12-09 23:35:33.024266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.050573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.050604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:53:52.449 [2024-12-09 23:35:33.050614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.291 ms 00:53:52.449 [2024-12-09 23:35:33.050621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.449 [2024-12-09 23:35:33.057743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.449 [2024-12-09 23:35:33.057767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:52.449 [2024-12-09 23:35:33.057780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:53:52.449 [2024-12-09 23:35:33.057786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.101815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.101854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:53:52.747 [2024-12-09 23:35:33.101865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.984 ms 00:53:52.747 [2024-12-09 23:35:33.101871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.102013] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:53:52.747 [2024-12-09 23:35:33.102092] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:53:52.747 [2024-12-09 23:35:33.102160] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:53:52.747 [2024-12-09 23:35:33.102232] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:53:52.747 [2024-12-09 23:35:33.102239] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.102245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:53:52.747 [2024-12-09 23:35:33.102252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:53:52.747 [2024-12-09 23:35:33.102258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.102303] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:53:52.747 [2024-12-09 23:35:33.102312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.102321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:53:52.747 [2024-12-09 23:35:33.102327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:53:52.747 [2024-12-09 23:35:33.102333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.113892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.113922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:53:52.747 [2024-12-09 23:35:33.113930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.542 ms 00:53:52.747 [2024-12-09 23:35:33.113936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.120484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.120507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:53:52.747 [2024-12-09 23:35:33.120515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:53:52.747 [2024-12-09 23:35:33.120521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:52.747 [2024-12-09 23:35:33.120585] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:53:52.747 [2024-12-09 23:35:33.120703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:52.747 [2024-12-09 23:35:33.120717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:52.747 [2024-12-09 23:35:33.120724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.119 ms 00:53:52.747 [2024-12-09 23:35:33.120730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.005 [2024-12-09 23:35:33.551863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.005 [2024-12-09 23:35:33.551921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:53.005 [2024-12-09 23:35:33.551935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 430.453 ms 00:53:53.005 [2024-12-09 23:35:33.551944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.005 [2024-12-09 23:35:33.556412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.005 [2024-12-09 23:35:33.556444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:53.005 [2024-12-09 23:35:33.556454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:53:53.005 [2024-12-09 23:35:33.556461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.005 [2024-12-09 23:35:33.557025] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:53:53.005 [2024-12-09 23:35:33.557054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.005 [2024-12-09 23:35:33.557062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:53.005 [2024-12-09 23:35:33.557071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:53:53.005 [2024-12-09 23:35:33.557086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.005 [2024-12-09 23:35:33.557116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.005 [2024-12-09 23:35:33.557126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:53.005 [2024-12-09 23:35:33.557134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:53.005 [2024-12-09 23:35:33.557145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.005 [2024-12-09 23:35:33.557178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 436.588 ms, result 0 00:53:53.005 [2024-12-09 23:35:33.557214] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:53:53.005 [2024-12-09 23:35:33.557311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.005 [2024-12-09 23:35:33.557321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:53.005 [2024-12-09 23:35:33.557329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:53:53.005 [2024-12-09 23:35:33.557336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.986371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.986427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:53.571 [2024-12-09 23:35:33.986456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 428.120 ms 00:53:53.571 [2024-12-09 23:35:33.986464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.990105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.990134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:53.571 [2024-12-09 23:35:33.990143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.735 ms 00:53:53.571 [2024-12-09 23:35:33.990151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.990403] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:53:53.571 [2024-12-09 23:35:33.990426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.990434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:53.571 [2024-12-09 23:35:33.990442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:53:53.571 [2024-12-09 23:35:33.990450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.990476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.990484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:53.571 [2024-12-09 23:35:33.990492] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:53.571 [2024-12-09 23:35:33.990498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.990532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 433.313 ms, result 0 00:53:53.571 [2024-12-09 23:35:33.990569] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:53:53.571 [2024-12-09 23:35:33.990584] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:53:53.571 [2024-12-09 23:35:33.990593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.990601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:53:53.571 [2024-12-09 23:35:33.990609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 870.023 ms 00:53:53.571 [2024-12-09 23:35:33.990616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:33.990645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:33.990656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:53:53.571 [2024-12-09 23:35:33.990664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:53.571 [2024-12-09 23:35:33.990671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.001245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:53.571 [2024-12-09 23:35:34.001338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:34.001348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:53.571 [2024-12-09 23:35:34.001357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.651 ms 00:53:53.571 [2024-12-09 23:35:34.001364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.002059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:34.002078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:53:53.571 [2024-12-09 23:35:34.002090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:53:53.571 [2024-12-09 23:35:34.002097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.004307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:34.004324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:53:53.571 [2024-12-09 23:35:34.004334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.193 ms 00:53:53.571 [2024-12-09 23:35:34.004342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.004378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:34.004387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:53:53.571 [2024-12-09 23:35:34.004395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:53.571 [2024-12-09 23:35:34.004404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.004500] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.571 [2024-12-09 23:35:34.004514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:53:53.571 [2024-12-09 23:35:34.004523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:53:53.571 [2024-12-09 23:35:34.004530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.571 [2024-12-09 23:35:34.004548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.572 [2024-12-09 23:35:34.004556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:53.572 [2024-12-09 23:35:34.004564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:53.572 [2024-12-09 23:35:34.004571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.572 [2024-12-09 23:35:34.004601] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:53:53.572 [2024-12-09 23:35:34.004610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.572 [2024-12-09 23:35:34.004617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:53:53.572 [2024-12-09 23:35:34.004625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:53:53.572 [2024-12-09 23:35:34.004632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.572 [2024-12-09 23:35:34.004680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:53.572 [2024-12-09 23:35:34.004688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:53.572 [2024-12-09 23:35:34.004696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:53:53.572 [2024-12-09 23:35:34.004708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:53.572 [2024-12-09 23:35:34.005634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1060.788 ms, result 0 00:53:53.572 [2024-12-09 23:35:34.017938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:53.572 [2024-12-09 23:35:34.033932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:53.572 [2024-12-09 23:35:34.042059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:53.830 23:35:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:53.830 23:35:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:53.830 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:53.830 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:53:53.830 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:54.088 Validate MD5 checksum, iteration 1 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:54.088 23:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:54.088 [2024-12-09 23:35:34.531779] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:53:54.088 [2024-12-09 23:35:34.531882] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83597 ] 00:53:54.088 [2024-12-09 23:35:34.691692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:54.346 [2024-12-09 23:35:34.786904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:55.722  [2024-12-09T23:35:36.923Z] Copying: 729/1024 [MB] (729 MBps) [2024-12-09T23:35:38.822Z] Copying: 1024/1024 [MB] (average 720 MBps) 00:53:58.186 00:53:58.186 23:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:53:58.186 23:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:54:00.084 Validate MD5 checksum, iteration 2 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d88fd63d2c50505fd6d882bc4a4c98f3 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d88fd63d2c50505fd6d882bc4a4c98f3 != \d\8\8\f\d\6\3\d\2\c\5\0\5\0\5\f\d\6\d\8\8\2\b\c\4\a\4\c\9\8\f\3 ]] 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:54:00.084 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:54:00.085 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:54:00.085 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:54:00.085 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:54:00.085 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:54:00.085 23:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:54:00.085 [2024-12-09 23:35:40.535846] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
00:54:00.085 [2024-12-09 23:35:40.535965] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83664 ]
00:54:00.085 [2024-12-09 23:35:40.691581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:00.343 [2024-12-09 23:35:40.772708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:54:01.716  [2024-12-09T23:35:42.611Z] Copying: 759/1024 [MB] (759 MBps) [2024-12-09T23:35:44.511Z] Copying: 1024/1024 [MB] (average 779 MBps)
00:54:03.875
00:54:03.875 23:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:54:03.875 23:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=281ea6fc5ff6c8d896d2013d8b50459c
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 281ea6fc5ff6c8d896d2013d8b50459c != \2\8\1\e\a\6\f\c\5\f\f\6\c\8\d\8\9\6\d\2\0\1\3\d\8\b\5\0\4\5\9\c ]]
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83567 ]]
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83567
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83567 ']'
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83567
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83567
00:54:05.801 killing process with pid 83567
23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83567'
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83567
00:54:05.801 23:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83567
00:54:06.382 [2024-12-09 23:35:46.906534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:54:06.382 [2024-12-09 23:35:46.919272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.382 [2024-12-09 23:35:46.919311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:54:06.382 [2024-12-09 23:35:46.919322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:54:06.382 [2024-12-09 23:35:46.919328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.382 [2024-12-09 23:35:46.919345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:54:06.383 [2024-12-09 23:35:46.921530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.921560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:54:06.383 [2024-12-09 23:35:46.921568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.173 ms
00:54:06.383 [2024-12-09 23:35:46.921575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.921756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.921767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:54:06.383 [2024-12-09 23:35:46.921778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms
00:54:06.383 [2024-12-09 23:35:46.921787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.922783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.922796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:54:06.383 [2024-12-09 23:35:46.922804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.980 ms
00:54:06.383 [2024-12-09 23:35:46.922814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.923734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.923755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:54:06.383 [2024-12-09 23:35:46.923763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms
00:54:06.383 [2024-12-09 23:35:46.923770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.931094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.931120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:54:06.383 [2024-12-09 23:35:46.931132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.287 ms
00:54:06.383 [2024-12-09 23:35:46.931139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.935160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.935186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:54:06.383 [2024-12-09 23:35:46.935194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.992 ms
00:54:06.383 [2024-12-09 23:35:46.935200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.935261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.935268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:54:06.383 [2024-12-09 23:35:46.935275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms
00:54:06.383 [2024-12-09 23:35:46.935284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.942224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.942251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:54:06.383 [2024-12-09 23:35:46.942257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.928 ms
00:54:06.383 [2024-12-09 23:35:46.942263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.949454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.949480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:54:06.383 [2024-12-09 23:35:46.949487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.166 ms
00:54:06.383 [2024-12-09 23:35:46.949493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.956623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.956650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:54:06.383 [2024-12-09 23:35:46.956657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.104 ms
00:54:06.383 [2024-12-09 23:35:46.956663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.963799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.963827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:54:06.383 [2024-12-09 23:35:46.963834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.090 ms
00:54:06.383 [2024-12-09 23:35:46.963840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.963866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:54:06.383 [2024-12-09 23:35:46.963877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:54:06.383 [2024-12-09 23:35:46.963884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:54:06.383 [2024-12-09 23:35:46.963891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:54:06.383 [2024-12-09 23:35:46.963897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:54:06.383 [2024-12-09 23:35:46.963991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:54:06.383 [2024-12-09 23:35:46.963997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a267ee6b-6a37-4ef3-b054-685c3a12fe75
00:54:06.383 [2024-12-09 23:35:46.964003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:54:06.383 [2024-12-09 23:35:46.964008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:54:06.383 [2024-12-09 23:35:46.964014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:54:06.383 [2024-12-09 23:35:46.964019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:54:06.383 [2024-12-09 23:35:46.964024] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:54:06.383 [2024-12-09 23:35:46.964030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:54:06.383 [2024-12-09 23:35:46.964040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:54:06.383 [2024-12-09 23:35:46.964045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:54:06.383 [2024-12-09 23:35:46.964050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:54:06.383 [2024-12-09 23:35:46.964055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.964061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:54:06.383 [2024-12-09 23:35:46.964067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms
00:54:06.383 [2024-12-09 23:35:46.964073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.973835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.973861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:54:06.383 [2024-12-09 23:35:46.973870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.749 ms
00:54:06.383 [2024-12-09 23:35:46.973875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:46.974161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:54:06.383 [2024-12-09 23:35:46.974172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:54:06.383 [2024-12-09 23:35:46.974179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms
00:54:06.383 [2024-12-09 23:35:46.974184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:47.007852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.383 [2024-12-09 23:35:47.007884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:54:06.383 [2024-12-09 23:35:47.007892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.383 [2024-12-09 23:35:47.007902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:47.007928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.383 [2024-12-09 23:35:47.007935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:54:06.383 [2024-12-09 23:35:47.007941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.383 [2024-12-09 23:35:47.007947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:47.008016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.383 [2024-12-09 23:35:47.008025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:54:06.383 [2024-12-09 23:35:47.008032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.383 [2024-12-09 23:35:47.008037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.383 [2024-12-09 23:35:47.008054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.383 [2024-12-09 23:35:47.008060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:54:06.383 [2024-12-09 23:35:47.008066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.383 [2024-12-09 23:35:47.008072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.069047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.069084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:54:06.642 [2024-12-09 23:35:47.069092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.069098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.117678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.117714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:54:06.642 [2024-12-09 23:35:47.117722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.117728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.117780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.117787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:54:06.642 [2024-12-09 23:35:47.117793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.117799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.117845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.117860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:54:06.642 [2024-12-09 23:35:47.117866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.117872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.117943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.117950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:54:06.642 [2024-12-09 23:35:47.117956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.117961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.118000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.118008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:54:06.642 [2024-12-09 23:35:47.118016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.118022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.118051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.118058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:54:06.642 [2024-12-09 23:35:47.118064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.118069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.118101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:54:06.642 [2024-12-09 23:35:47.118111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:54:06.642 [2024-12-09 23:35:47.118117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:54:06.642 [2024-12-09 23:35:47.118123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:54:06.642 [2024-12-09 23:35:47.118213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 198.918 ms, result 0
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:54:07.208 Remove shared memory files
23:35:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83327
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:54:07.208
00:54:07.208 real 1m28.128s
00:54:07.208 user 1m59.894s
00:54:07.208 sys 0m19.298s
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:54:07.208 23:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:54:07.208 ************************************
00:54:07.208 END TEST ftl_upgrade_shutdown
************************************
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@14 -- # killprocess 75089
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@954 -- # '[' -z 75089 ']'
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@958 -- # kill -0 75089
00:54:07.208 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75089) - No such process
00:54:07.208 Process with pid 75089 is not found
23:35:47 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75089 is not found'
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83776
00:54:07.208 23:35:47 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83776
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@835 -- # '[' -z 83776 ']'
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:54:07.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
23:35:47 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:54:07.208 23:35:47 ftl -- common/autotest_common.sh@10 -- # set +x
00:54:07.466 [2024-12-09 23:35:47.891227] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization...
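The killprocess sequences traced above (pids 83567 and 75089) reduce to a guard-and-reap shell pattern: probe the pid, bail out if it is already gone, otherwise signal and wait. A minimal sketch reconstructed from the xtrace lines — not the verbatim autotest_common.sh source, which additionally inspects the process name via ps and special-cases sudo-owned processes:

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1               # refuse an empty pid argument
      if ! kill -0 "$pid" 2>/dev/null; then   # signal 0 only probes for existence
        echo "Process with pid $pid is not found"
        return 0                              # the pid-75089 branch above: nothing to do
      fi
      echo "killing process with pid $pid"
      kill "$pid"                             # default SIGTERM
      wait "$pid" 2>/dev/null || true         # reap it when it is our child, as with pid 83567
    }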
00:54:07.466 [2024-12-09 23:35:47.891961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83776 ]
00:54:07.466 [2024-12-09 23:35:48.047233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:07.724 [2024-12-09 23:35:48.124294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:54:08.290 23:35:48 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:54:08.290 23:35:48 ftl -- common/autotest_common.sh@868 -- # return 0
00:54:08.290 23:35:48 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:54:08.548 nvme0n1
00:54:08.548 23:35:48 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:54:08.548 23:35:48 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:54:08.548 23:35:48 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:54:08.548 23:35:49 ftl -- ftl/common.sh@28 -- # stores=a5a8afb9-fa23-4070-87fe-0c3a36ddd106
00:54:08.548 23:35:49 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:54:08.548 23:35:49 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5a8afb9-fa23-4070-87fe-0c3a36ddd106
00:54:08.807 23:35:49 ftl -- ftl/ftl.sh@23 -- # killprocess 83776
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@954 -- # '[' -z 83776 ']'
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@958 -- # kill -0 83776
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@959 -- # uname
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83776
00:54:08.807 killing process with pid 83776
23:35:49 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83776'
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@973 -- # kill 83776
00:54:08.807 23:35:49 ftl -- common/autotest_common.sh@978 -- # wait 83776
00:54:10.180 23:35:50 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:54:10.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:54:10.180 Waiting for block devices as requested
00:54:10.438 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:54:10.438 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:54:10.438 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:54:10.438 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:54:15.699 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:54:15.699 23:35:56 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:54:15.699 23:35:56 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:54:15.699 Remove shared memory files
23:35:56 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:54:15.699 23:35:56 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:54:15.699 23:35:56 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:54:15.699 23:35:56 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:54:15.699 23:35:56 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:54:15.699 ************************************
00:54:15.699 END TEST ftl
00:54:15.699 ************************************
00:54:15.699
00:54:15.700 real 12m57.996s
00:54:15.700 user 15m21.195s
00:54:15.700 sys 1m16.312s
00:54:15.700 23:35:56 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:54:15.700 23:35:56 ftl -- common/autotest_common.sh@10 -- # set +x
00:54:15.700 23:35:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:54:15.700 23:35:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:54:15.700 23:35:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:54:15.700 23:35:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:54:15.700 23:35:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:54:15.700 23:35:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:54:15.700 23:35:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:54:15.700 23:35:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:54:15.700 23:35:56 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:54:15.700 23:35:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:54:15.700 23:35:56 -- common/autotest_common.sh@726 -- # xtrace_disable
00:54:15.700 23:35:56 -- common/autotest_common.sh@10 -- # set +x
00:54:15.700 23:35:56 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:54:15.700 23:35:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:54:15.700 23:35:56 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:54:15.700 23:35:56 -- common/autotest_common.sh@10 -- # set +x
00:54:17.101 INFO: APP EXITING
00:54:17.101 INFO: killing all VMs
00:54:17.101 INFO: killing vhost app
00:54:17.101 INFO: EXIT DONE
00:54:17.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:54:17.359 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:54:17.359 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:54:17.359 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:54:17.359 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:54:17.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:54:18.182 Cleaning
00:54:18.182 Removing: /var/run/dpdk/spdk0/config
00:54:18.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:54:18.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:54:18.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:54:18.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:54:18.182 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:54:18.182 Removing: /var/run/dpdk/spdk0/hugepage_info
00:54:18.182 Removing: /var/run/dpdk/spdk0
00:54:18.182 Removing: /var/run/dpdk/spdk_pid56936
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57138
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57351
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57444
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57483
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57600
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57618
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57812
00:54:18.182 Removing: /var/run/dpdk/spdk_pid57905
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58001
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58112
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58209
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58243
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58285
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58350
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58449
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58881
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58945
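The clear_lvols step in the trace above drives exactly two RPCs plus a jq filter. A minimal sketch of the same loop, using the paths and RPC names shown in this job's trace (one uuid per line from jq, as common.sh@28-@30 indicate):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # list every lvstore uuid currently registered with the target
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
      "$rpc" bdev_lvol_delete_lvstore -u "$lvs"   # delete it so the base bdev is free again
    done

Dropping leftover lvstores this way is presumably what lets the job reuse the freshly attached nvme0 bdev without wiping the device.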
00:54:18.182 Removing: /var/run/dpdk/spdk_pid58997
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59013
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59115
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59130
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59222
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59238
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59291
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59309
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59362
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59380
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59540
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59577
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59660
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59832
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59916
00:54:18.182 Removing: /var/run/dpdk/spdk_pid59953
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60386
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60484
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60595
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60650
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60680
00:54:18.182 Removing: /var/run/dpdk/spdk_pid60754
00:54:18.182 Removing: /var/run/dpdk/spdk_pid61380
00:54:18.182 Removing: /var/run/dpdk/spdk_pid61417
00:54:18.182 Removing: /var/run/dpdk/spdk_pid61902
00:54:18.182 Removing: /var/run/dpdk/spdk_pid62000
00:54:18.182 Removing: /var/run/dpdk/spdk_pid62115
00:54:18.182 Removing: /var/run/dpdk/spdk_pid62162
00:54:18.182 Removing: /var/run/dpdk/spdk_pid62188
00:54:18.182 Removing: /var/run/dpdk/spdk_pid62213
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64058
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64190
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64199
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64211
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64258
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64262
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64274
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64319
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64323
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64335
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64380
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64384
00:54:18.182 Removing: /var/run/dpdk/spdk_pid64396
00:54:18.182 Removing: /var/run/dpdk/spdk_pid65781
00:54:18.182 Removing: /var/run/dpdk/spdk_pid65884
00:54:18.182 Removing: /var/run/dpdk/spdk_pid67293
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69020
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69094
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69169
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69279
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69365
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69461
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69538
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69613
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69723
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69809
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69905
00:54:18.182 Removing: /var/run/dpdk/spdk_pid69973
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70049
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70153
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70239
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70340
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70414
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70495
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70599
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70691
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70791
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70861
00:54:18.182 Removing: /var/run/dpdk/spdk_pid70935
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71009
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71090
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71199
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71284
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71387
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71457
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71531
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71612
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71686
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71789
00:54:18.183 Removing: /var/run/dpdk/spdk_pid71880
00:54:18.183 Removing: /var/run/dpdk/spdk_pid72024
00:54:18.183 Removing: /var/run/dpdk/spdk_pid72308
00:54:18.183 Removing: /var/run/dpdk/spdk_pid72346
00:54:18.183 Removing: /var/run/dpdk/spdk_pid72802
00:54:18.183 Removing: /var/run/dpdk/spdk_pid72985
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73099
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73215
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73271
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73291
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73597
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73651
00:54:18.183 Removing: /var/run/dpdk/spdk_pid73735
00:54:18.183 Removing: /var/run/dpdk/spdk_pid74138
00:54:18.440 Removing: /var/run/dpdk/spdk_pid74278
00:54:18.440 Removing: /var/run/dpdk/spdk_pid75089
00:54:18.440 Removing: /var/run/dpdk/spdk_pid75221
00:54:18.440 Removing: /var/run/dpdk/spdk_pid75407
00:54:18.440 Removing: /var/run/dpdk/spdk_pid75521
00:54:18.440 Removing: /var/run/dpdk/spdk_pid75895
00:54:18.440 Removing: /var/run/dpdk/spdk_pid76164
00:54:18.440 Removing: /var/run/dpdk/spdk_pid76506
00:54:18.440 Removing: /var/run/dpdk/spdk_pid76699
00:54:18.440 Removing: /var/run/dpdk/spdk_pid76884
00:54:18.440 Removing: /var/run/dpdk/spdk_pid76937
00:54:18.440 Removing: /var/run/dpdk/spdk_pid77119
00:54:18.440 Removing: /var/run/dpdk/spdk_pid77144
00:54:18.440 Removing: /var/run/dpdk/spdk_pid77191
00:54:18.440 Removing: /var/run/dpdk/spdk_pid77362
00:54:18.440 Removing: /var/run/dpdk/spdk_pid77594
00:54:18.440 Removing: /var/run/dpdk/spdk_pid78376
00:54:18.440 Removing: /var/run/dpdk/spdk_pid79006
00:54:18.440 Removing: /var/run/dpdk/spdk_pid79657
00:54:18.440 Removing: /var/run/dpdk/spdk_pid80481
00:54:18.440 Removing: /var/run/dpdk/spdk_pid80632
00:54:18.440 Removing: /var/run/dpdk/spdk_pid80720
00:54:18.440 Removing: /var/run/dpdk/spdk_pid81203
00:54:18.440 Removing: /var/run/dpdk/spdk_pid81269
00:54:18.440 Removing: /var/run/dpdk/spdk_pid81973
00:54:18.440 Removing: /var/run/dpdk/spdk_pid82343
00:54:18.440 Removing: /var/run/dpdk/spdk_pid82786
00:54:18.440 Removing: /var/run/dpdk/spdk_pid82903
00:54:18.440 Removing: /var/run/dpdk/spdk_pid82950
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83005
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83061
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83129
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83327
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83400
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83476
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83567
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83597
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83664
00:54:18.440 Removing: /var/run/dpdk/spdk_pid83776
00:54:18.440 Clean
00:54:18.440 23:35:58 -- common/autotest_common.sh@1453 -- # return 0
00:54:18.440 23:35:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:54:18.440 23:35:58 -- common/autotest_common.sh@732 -- # xtrace_disable
00:54:18.440 23:35:58 -- common/autotest_common.sh@10 -- # set +x
00:54:18.440 23:35:58 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:54:18.440 23:35:58 -- common/autotest_common.sh@732 -- # xtrace_disable
00:54:18.440 23:35:58 -- common/autotest_common.sh@10 -- # set +x
00:54:18.440 23:35:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:54:18.440 23:35:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:54:18.440 23:35:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:54:18.440 23:35:59 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:54:18.440 23:35:59 -- spdk/autotest.sh@398 -- # hostname
00:54:18.440 23:35:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:54:18.698 geninfo: WARNING: invalid characters removed from testname!
00:54:45.236 23:36:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:45.236 23:36:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:47.138 23:36:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:49.679 23:36:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:51.063 23:36:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:52.978 23:36:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:54.886 23:36:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:54:54.886 23:36:35 -- spdk/autorun.sh@1 -- $ timing_finish
00:54:54.886 23:36:35 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:54:54.886 23:36:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:54:54.886 23:36:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:54:54.886 23:36:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:54:54.886 + [[ -n 5020 ]]
00:54:54.886 + sudo kill 5020
00:54:54.895 [Pipeline] }
00:54:54.911 [Pipeline] // timeout
00:54:54.918 [Pipeline] }
00:54:54.935 [Pipeline] // stage
00:54:54.942 [Pipeline] }
00:54:54.957 [Pipeline] // catchError
00:54:54.966 [Pipeline] stage
00:54:54.968 [Pipeline] { (Stop VM)
00:54:54.980 [Pipeline] sh
00:54:55.258 + vagrant halt
00:54:57.785 ==> default: Halting domain...
00:55:03.067 [Pipeline] sh
00:55:03.350 + vagrant destroy -f
00:55:05.929 ==> default: Removing domain...
00:55:06.881 [Pipeline] sh
00:55:07.165 + mv output /var/jenkins/workspace/nvme-vg-autotest_4/output
00:55:07.176 [Pipeline] }
00:55:07.190 [Pipeline] // stage
00:55:07.196 [Pipeline] }
00:55:07.209 [Pipeline] // dir
00:55:07.214 [Pipeline] }
00:55:07.227 [Pipeline] // wrap
00:55:07.232 [Pipeline] }
00:55:07.244 [Pipeline] // catchError
00:55:07.252 [Pipeline] stage
00:55:07.254 [Pipeline] { (Epilogue)
00:55:07.266 [Pipeline] sh
00:55:07.549 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:55:14.145 [Pipeline] catchError
00:55:14.147 [Pipeline] {
00:55:14.159 [Pipeline] sh
00:55:14.446 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:55:14.446 Artifacts sizes are good
00:55:14.457 [Pipeline] }
00:55:14.470 [Pipeline] // catchError
00:55:14.480 [Pipeline] archiveArtifacts
00:55:14.487 Archiving artifacts
00:55:14.581 [Pipeline] cleanWs
00:55:14.593 [WS-CLEANUP] Deleting project workspace...
00:55:14.593 [WS-CLEANUP] Deferred wipeout is used...
00:55:14.601 [WS-CLEANUP] done
00:55:14.603 [Pipeline] }
00:55:14.633 [Pipeline] // stage
00:55:14.643 [Pipeline] }
00:55:14.671 [Pipeline] // node
00:55:14.682 [Pipeline] End of Pipeline
00:55:14.720 Finished: SUCCESS
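For reference, the coverage post-processing traced at autotest.sh@398-@407 above reduces to a capture, merge, and filter pipeline. A condensed sketch; each real invocation also carries the block of --rc lcov_branch_coverage=1 ... --rc geninfo_unexecuted_blocks=1 flags shown in the trace, elided here for readability, and the output directory is written as spdk/../output there (normalized below):

    out=/home/vagrant/spdk_repo/output
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"   # capture post-test counters
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"                    # merge baseline + test data
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"                                # strip bundled DPDK sources
    lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"    # strip system headers

The job repeats the same -r filter for the vmd example and the spdk_lspci/spdk_top apps before deleting the intermediate cov_base.info and cov_test.info files, leaving cov_total.info as the archived artifact.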