00:00:00.001 Started by upstream project "autotest-per-patch" build number 132561
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.128 The recommended git tool is: git
00:00:00.128 using credential 00000000-0000-0000-0000-000000000002
00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.214 Fetching changes from the remote Git repository
00:00:00.217 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.290 Using shallow fetch with depth 1
00:00:00.290 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.290 > git --version # timeout=10
00:00:00.346 > git --version # 'git version 2.39.2'
00:00:00.346 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.379 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.379 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.074 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.087 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.101 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.101 > git config core.sparsecheckout # timeout=10
00:00:04.115 > git read-tree -mu HEAD # timeout=10
00:00:04.133 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.163 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.163 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.392 [Pipeline] Start of Pipeline
00:00:04.406 [Pipeline] library
00:00:04.407 Loading library shm_lib@master
00:00:04.407 Library shm_lib@master is cached. Copying from home.
00:00:04.425 [Pipeline] node
00:00:19.452 Still waiting to schedule task
00:00:19.453 Waiting for next available executor on ‘vagrant-vm-host’
00:12:24.189 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3
00:12:24.190 [Pipeline] {
00:12:24.203 [Pipeline] catchError
00:12:24.205 [Pipeline] {
00:12:24.221 [Pipeline] wrap
00:12:24.235 [Pipeline] {
00:12:24.248 [Pipeline] stage
00:12:24.251 [Pipeline] { (Prologue)
00:12:24.273 [Pipeline] echo
00:12:24.275 Node: VM-host-SM38
00:12:24.282 [Pipeline] cleanWs
00:12:24.291 [WS-CLEANUP] Deleting project workspace...
00:12:24.291 [WS-CLEANUP] Deferred wipeout is used...
00:12:24.297 [WS-CLEANUP] done
00:12:24.529 [Pipeline] setCustomBuildProperty
00:12:24.628 [Pipeline] httpRequest
00:12:24.969 [Pipeline] echo
00:12:24.971 Sorcerer 10.211.164.20 is alive
00:12:24.983 [Pipeline] retry
00:12:24.986 [Pipeline] {
00:12:25.002 [Pipeline] httpRequest
00:12:25.007 HttpMethod: GET
00:12:25.007 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:12:25.008 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:12:25.009 Response Code: HTTP/1.1 200 OK
00:12:25.009 Success: Status code 200 is in the accepted range: 200,404
00:12:25.010 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:12:25.443 [Pipeline] }
00:12:25.462 [Pipeline] // retry
00:12:25.470 [Pipeline] sh
00:12:25.750 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:12:25.793 [Pipeline] httpRequest
00:12:26.139 [Pipeline] echo
00:12:26.141 Sorcerer 10.211.164.20 is alive
00:12:26.153 [Pipeline] retry
00:12:26.155 [Pipeline] {
00:12:26.171 [Pipeline] httpRequest
00:12:26.177 HttpMethod: GET
00:12:26.177 URL: http://10.211.164.20/packages/spdk_78decfef624b951a4cdd71e3f59de847a98823c5.tar.gz
00:12:26.178 Sending request to url: http://10.211.164.20/packages/spdk_78decfef624b951a4cdd71e3f59de847a98823c5.tar.gz
00:12:26.180 Response Code: HTTP/1.1 200 OK
00:12:26.180 Success: Status code 200 is in the accepted range: 200,404
00:12:26.181 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_78decfef624b951a4cdd71e3f59de847a98823c5.tar.gz
00:12:30.107 [Pipeline] }
00:12:30.126 [Pipeline] // retry
00:12:30.133 [Pipeline] sh
00:12:30.435 + tar --no-same-owner -xf spdk_78decfef624b951a4cdd71e3f59de847a98823c5.tar.gz
00:12:33.741 [Pipeline] sh
00:12:34.018 + git -C spdk log --oneline -n5
00:12:34.018 78decfef6 bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:12:34.018 a640d9f98 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:12:34.018 ae1917872 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:12:34.018 ff68c6e68 nvmf: Expose DIF type of namespace to host again
00:12:34.018 dd10a9655 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:12:34.037 [Pipeline] writeFile
00:12:34.053 [Pipeline] sh
00:12:34.334 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:12:34.345 [Pipeline] sh
00:12:34.623 + cat autorun-spdk.conf
00:12:34.623 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:34.623 SPDK_TEST_NVME=1
00:12:34.623 SPDK_TEST_FTL=1
00:12:34.623 SPDK_TEST_ISAL=1
00:12:34.623 SPDK_RUN_ASAN=1
00:12:34.623 SPDK_RUN_UBSAN=1
00:12:34.623 SPDK_TEST_XNVME=1
00:12:34.623 SPDK_TEST_NVME_FDP=1
00:12:34.623 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:34.629 RUN_NIGHTLY=0
00:12:34.632 [Pipeline] }
00:12:34.646 [Pipeline] // stage
00:12:34.662 [Pipeline] stage
00:12:34.665 [Pipeline] { (Run VM)
00:12:34.677 [Pipeline] sh
00:12:34.955 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:12:34.955 + echo 'Start stage prepare_nvme.sh'
00:12:34.955 Start stage prepare_nvme.sh
00:12:34.955 + [[ -n 7 ]]
00:12:34.955 + disk_prefix=ex7
00:12:34.955 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:12:34.955 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:12:34.955 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:12:34.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:34.955 ++ SPDK_TEST_NVME=1
00:12:34.955 ++ SPDK_TEST_FTL=1
00:12:34.955 ++ SPDK_TEST_ISAL=1
00:12:34.956 ++ SPDK_RUN_ASAN=1
00:12:34.956 ++ SPDK_RUN_UBSAN=1
00:12:34.956 ++ SPDK_TEST_XNVME=1
00:12:34.956 ++ SPDK_TEST_NVME_FDP=1
00:12:34.956 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:34.956 ++ RUN_NIGHTLY=0
00:12:34.956 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:12:34.956 + nvme_files=()
00:12:34.956 + declare -A nvme_files
00:12:34.956 + backend_dir=/var/lib/libvirt/images/backends
00:12:34.956 + nvme_files['nvme.img']=5G
00:12:34.956 + nvme_files['nvme-cmb.img']=5G
00:12:34.956 + nvme_files['nvme-multi0.img']=4G
00:12:34.956 + nvme_files['nvme-multi1.img']=4G
00:12:34.956 + nvme_files['nvme-multi2.img']=4G
00:12:34.956 + nvme_files['nvme-openstack.img']=8G
00:12:34.956 + nvme_files['nvme-zns.img']=5G
00:12:34.956 + (( SPDK_TEST_NVME_PMR == 1 ))
00:12:34.956 + (( SPDK_TEST_FTL == 1 ))
00:12:34.956 + nvme_files["nvme-ftl.img"]=6G
00:12:34.956 + (( SPDK_TEST_NVME_FDP == 1 ))
00:12:34.956 + nvme_files["nvme-fdp.img"]=1G
00:12:34.956 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:12:34.956 + for nvme in "${!nvme_files[@]}"
00:12:34.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:12:34.956 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:12:34.956 + for nvme in "${!nvme_files[@]}"
00:12:34.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:12:35.214 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:12:35.214 + for nvme in "${!nvme_files[@]}"
00:12:35.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:12:35.214 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:12:35.214 + for nvme in "${!nvme_files[@]}"
00:12:35.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:12:35.214 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:12:35.214 + for nvme in "${!nvme_files[@]}"
00:12:35.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:12:35.845 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:12:35.845 + for nvme in "${!nvme_files[@]}"
00:12:35.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:12:35.845 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:12:35.845 + for nvme in "${!nvme_files[@]}"
00:12:35.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:12:35.845 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:12:35.845 + for nvme in "${!nvme_files[@]}"
00:12:35.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:12:35.845 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:12:35.845 + for nvme in "${!nvme_files[@]}"
00:12:35.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:12:36.778 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:12:36.778 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:12:36.778 + echo 'End stage prepare_nvme.sh'
00:12:36.778 End stage prepare_nvme.sh
00:12:36.788 [Pipeline] sh
00:12:37.066 + DISTRO=fedora39
00:12:37.067 + CPUS=10
00:12:37.067 + RAM=12288
00:12:37.067 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:12:37.067 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:12:37.067
00:12:37.067 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:12:37.067 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:12:37.067 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:12:37.067 HELP=0
00:12:37.067 DRY_RUN=0
00:12:37.067 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:12:37.067 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:12:37.067 NVME_AUTO_CREATE=0
00:12:37.067 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:12:37.067 NVME_CMB=,,,,
00:12:37.067 NVME_PMR=,,,,
00:12:37.067 NVME_ZNS=,,,,
00:12:37.067 NVME_MS=true,,,,
00:12:37.067 NVME_FDP=,,,on,
00:12:37.067 SPDK_VAGRANT_DISTRO=fedora39
00:12:37.067 SPDK_VAGRANT_VMCPU=10
00:12:37.067 SPDK_VAGRANT_VMRAM=12288
00:12:37.067 SPDK_VAGRANT_PROVIDER=libvirt
00:12:37.067 SPDK_VAGRANT_HTTP_PROXY=
00:12:37.067 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:12:37.067 SPDK_OPENSTACK_NETWORK=0
00:12:37.067 VAGRANT_PACKAGE_BOX=0
00:12:37.067 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:12:37.067 FORCE_DISTRO=true
00:12:37.067 VAGRANT_BOX_VERSION=
00:12:37.067 EXTRA_VAGRANTFILES=
00:12:37.067 NIC_MODEL=e1000
00:12:37.067
00:12:37.067 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt'
00:12:37.067 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:12:39.613 Bringing machine 'default' up with 'libvirt' provider...
00:12:40.184 ==> default: Creating image (snapshot of base box volume).
00:12:40.184 ==> default: Creating domain with the following settings...
00:12:40.184 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732682086_df23b4a6cd77aa98c025
00:12:40.184 ==> default: -- Domain type: kvm
00:12:40.184 ==> default: -- Cpus: 10
00:12:40.184 ==> default: -- Feature: acpi
00:12:40.184 ==> default: -- Feature: apic
00:12:40.184 ==> default: -- Feature: pae
00:12:40.184 ==> default: -- Memory: 12288M
00:12:40.184 ==> default: -- Memory Backing: hugepages:
00:12:40.184 ==> default: -- Management MAC:
00:12:40.184 ==> default: -- Loader:
00:12:40.184 ==> default: -- Nvram:
00:12:40.184 ==> default: -- Base box: spdk/fedora39
00:12:40.184 ==> default: -- Storage pool: default
00:12:40.184 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732682086_df23b4a6cd77aa98c025.img (20G)
00:12:40.184 ==> default: -- Volume Cache: default
00:12:40.184 ==> default: -- Kernel:
00:12:40.184 ==> default: -- Initrd:
00:12:40.184 ==> default: -- Graphics Type: vnc
00:12:40.184 ==> default: -- Graphics Port: -1
00:12:40.184 ==> default: -- Graphics IP: 127.0.0.1
00:12:40.184 ==> default: -- Graphics Password: Not defined
00:12:40.184 ==> default: -- Video Type: cirrus
00:12:40.184 ==> default: -- Video VRAM: 9216
00:12:40.184 ==> default: -- Sound Type:
00:12:40.184 ==> default: -- Keymap: en-us
00:12:40.184 ==> default: -- TPM Path:
00:12:40.184 ==> default: -- INPUT: type=mouse, bus=ps2
00:12:40.184 ==> default: -- Command line args:
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:12:40.184 ==> default: -> value=-drive,
00:12:40.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:12:40.184 ==> default: -> value=-drive,
00:12:40.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:12:40.184 ==> default: -> value=-drive,
00:12:40.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:40.184 ==> default: -> value=-drive,
00:12:40.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:12:40.184 ==> default: -> value=-device,
00:12:40.184 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:40.184 ==> default: -> value=-drive,
00:12:40.185 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:12:40.185 ==> default: -> value=-device,
00:12:40.185 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:40.185 ==> default: -> value=-device,
00:12:40.185 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:12:40.185 ==> default: -> value=-device,
00:12:40.185 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:12:40.185 ==> default: -> value=-drive,
00:12:40.185 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:12:40.185 ==> default: -> value=-device,
00:12:40.185 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:40.444 ==> default: Creating shared folders metadata...
00:12:40.444 ==> default: Starting domain.
00:12:41.821 ==> default: Waiting for domain to get an IP address...
00:12:56.711 ==> default: Waiting for SSH to become available...
00:12:57.645 ==> default: Configuring and enabling network interfaces...
00:13:01.830 default: SSH address: 192.168.121.76:22
00:13:01.830 default: SSH username: vagrant
00:13:01.830 default: SSH auth method: private key
00:13:03.204 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:13:11.376 ==> default: Mounting SSHFS shared folder...
00:13:12.309 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:13:12.309 ==> default: Checking Mount..
00:13:13.300 ==> default: Folder Successfully Mounted!
00:13:13.300
00:13:13.300 SUCCESS!
00:13:13.300
00:13:13.300 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:13:13.300 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:13:13.300 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:13:13.300
00:13:13.309 [Pipeline] }
00:13:13.324 [Pipeline] // stage
00:13:13.335 [Pipeline] dir
00:13:13.335 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:13:13.337 [Pipeline] {
00:13:13.349 [Pipeline] catchError
00:13:13.351 [Pipeline] {
00:13:13.364 [Pipeline] sh
00:13:13.643 + vagrant ssh-config --host vagrant
00:13:13.643 + sed -ne '/^Host/,$p'
00:13:13.643 + tee ssh_conf
00:13:16.173 Host vagrant
00:13:16.173 HostName 192.168.121.76
00:13:16.173 User vagrant
00:13:16.173 Port 22
00:13:16.173 UserKnownHostsFile /dev/null
00:13:16.173 StrictHostKeyChecking no
00:13:16.173 PasswordAuthentication no
00:13:16.173 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:13:16.173 IdentitiesOnly yes
00:13:16.173 LogLevel FATAL
00:13:16.173 ForwardAgent yes
00:13:16.173 ForwardX11 yes
00:13:16.173
00:13:16.184 [Pipeline] withEnv
00:13:16.186 [Pipeline] {
00:13:16.199 [Pipeline] sh
00:13:16.481 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:13:16.481 source /etc/os-release
00:13:16.481 [[ -e /image.version ]] && img=$(< /image.version)
00:13:16.481 # Minimal, systemd-like check.
00:13:16.481 if [[ -e /.dockerenv ]]; then
00:13:16.481 # Clear garbage from the node'\''s name:
00:13:16.481 # agt-er_autotest_547-896 -> autotest_547-896
00:13:16.481 # $HOSTNAME is the actual container id
00:13:16.481 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:13:16.481 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:13:16.481 # We can assume this is a mount from a host where container is running,
00:13:16.481 # so fetch its hostname to easily identify the target swarm worker.
00:13:16.481 container="$(< /etc/hostname) ($agent)"
00:13:16.481 else
00:13:16.481 # Fallback
00:13:16.481 container=$agent
00:13:16.481 fi
00:13:16.481 fi
00:13:16.481 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:13:16.481 '
00:13:16.496 [Pipeline] }
00:13:16.512 [Pipeline] // withEnv
00:13:16.520 [Pipeline] setCustomBuildProperty
00:13:16.536 [Pipeline] stage
00:13:16.538 [Pipeline] { (Tests)
00:13:16.554 [Pipeline] sh
00:13:16.832 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:13:16.845 [Pipeline] sh
00:13:17.119 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:13:17.135 [Pipeline] timeout
00:13:17.135 Timeout set to expire in 50 min
00:13:17.137 [Pipeline] {
00:13:17.154 [Pipeline] sh
00:13:17.438 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:13:17.696 HEAD is now at 78decfef6 bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:13:17.707 [Pipeline] sh
00:13:17.984 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:13:17.998 [Pipeline] sh
00:13:18.274 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:13:18.548 [Pipeline] sh
00:13:18.894 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:13:18.894 ++ readlink -f spdk_repo
00:13:18.894 + DIR_ROOT=/home/vagrant/spdk_repo
00:13:18.894 + [[ -n /home/vagrant/spdk_repo ]]
00:13:18.894 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:13:18.894 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:13:18.894 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:13:18.894 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:13:18.894 + [[ -d /home/vagrant/spdk_repo/output ]]
00:13:18.894 + [[ nvme-vg-autotest == pkgdep-* ]]
00:13:18.894 + cd /home/vagrant/spdk_repo
00:13:18.894 + source /etc/os-release
00:13:18.894 ++ NAME='Fedora Linux'
00:13:18.894 ++ VERSION='39 (Cloud Edition)'
00:13:18.894 ++ ID=fedora
00:13:18.894 ++ VERSION_ID=39
00:13:18.894 ++ VERSION_CODENAME=
00:13:18.894 ++ PLATFORM_ID=platform:f39
00:13:18.894 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:13:18.894 ++ ANSI_COLOR='0;38;2;60;110;180'
00:13:18.894 ++ LOGO=fedora-logo-icon
00:13:18.894 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:13:18.894 ++ HOME_URL=https://fedoraproject.org/
00:13:18.894 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:13:18.894 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:13:18.894 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:13:18.894 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:13:18.894 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:13:18.894 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:13:18.894 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:13:18.894 ++ SUPPORT_END=2024-11-12
00:13:18.894 ++ VARIANT='Cloud Edition'
00:13:18.894 ++ VARIANT_ID=cloud
00:13:18.894 + uname -a
00:13:18.894 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:13:18.894 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:13:19.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:19.410 Hugepages
00:13:19.410 node hugesize free / total
00:13:19.410 node0 1048576kB 0 / 0
00:13:19.410 node0 2048kB 0 / 0
00:13:19.410
00:13:19.410 Type BDF Vendor Device NUMA Driver Device Block devices
00:13:19.410 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:13:19.410 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:13:19.410 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:13:19.410 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:13:19.410 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:13:19.669 + rm -f /tmp/spdk-ld-path
00:13:19.669 + source autorun-spdk.conf
00:13:19.669 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:19.669 ++ SPDK_TEST_NVME=1
00:13:19.669 ++ SPDK_TEST_FTL=1
00:13:19.669 ++ SPDK_TEST_ISAL=1
00:13:19.669 ++ SPDK_RUN_ASAN=1
00:13:19.669 ++ SPDK_RUN_UBSAN=1
00:13:19.669 ++ SPDK_TEST_XNVME=1
00:13:19.669 ++ SPDK_TEST_NVME_FDP=1
00:13:19.669 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:19.669 ++ RUN_NIGHTLY=0
00:13:19.669 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:13:19.669 + [[ -n '' ]]
00:13:19.669 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:13:19.669 + for M in /var/spdk/build-*-manifest.txt
00:13:19.669 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:13:19.669 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:13:19.669 + for M in /var/spdk/build-*-manifest.txt
00:13:19.669 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:13:19.669 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:13:19.669 + for M in /var/spdk/build-*-manifest.txt
00:13:19.669 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:13:19.669 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:13:19.669 ++ uname
00:13:19.669 + [[ Linux == \L\i\n\u\x ]]
00:13:19.669 + sudo dmesg -T
00:13:19.669 + sudo dmesg --clear
00:13:19.669 + dmesg_pid=5031
00:13:19.669 + [[ Fedora Linux == FreeBSD ]]
00:13:19.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:19.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:19.669 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:19.669 + sudo dmesg -Tw
00:13:19.669 + [[ -x /usr/src/fio-static/fio ]]
00:13:19.669 + export FIO_BIN=/usr/src/fio-static/fio
00:13:19.669 + FIO_BIN=/usr/src/fio-static/fio
00:13:19.669 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:13:19.669 + [[ ! -v VFIO_QEMU_BIN ]]
00:13:19.669 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:13:19.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:19.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:19.669 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:13:19.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:19.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:19.669 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:19.669 04:35:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:13:19.669 04:35:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:19.669 04:35:26 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:13:19.669 04:35:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:13:19.669 04:35:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:19.669 04:35:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:13:19.669 04:35:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:19.669 04:35:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:13:19.669 04:35:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:13:19.669 04:35:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:19.669 04:35:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:19.669 04:35:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:19.669 04:35:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:19.669 04:35:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:19.669 04:35:26 -- paths/export.sh@5 -- $ export PATH
00:13:19.669 04:35:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:19.669 04:35:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:13:19.669 04:35:26 -- common/autobuild_common.sh@493 -- $ date +%s
00:13:19.669 04:35:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732682126.XXXXXX
00:13:19.669 04:35:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732682126.Ieo0vc
00:13:19.669 04:35:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:13:19.669 04:35:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:13:19.669 04:35:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:13:19.669 04:35:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:13:19.669 04:35:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:13:19.669 04:35:26 -- common/autobuild_common.sh@509 -- $ get_config_params
00:13:19.669 04:35:26 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:13:19.669 04:35:26 -- common/autotest_common.sh@10 -- $ set +x
00:13:19.670 04:35:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:13:19.670 04:35:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:13:19.670 04:35:26 -- pm/common@17 -- $ local monitor
00:13:19.670 04:35:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:19.670 04:35:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:19.670 04:35:26 -- pm/common@25 -- $ sleep 1
00:13:19.670 04:35:26 -- pm/common@21 -- $ date +%s
00:13:19.670 04:35:26 -- pm/common@21 -- $ date +%s
00:13:19.670 04:35:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732682126
00:13:19.670 04:35:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732682126
00:13:19.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732682126_collect-cpu-load.pm.log
00:13:19.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732682126_collect-vmstat.pm.log
00:13:21.044 04:35:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:13:21.044 04:35:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:13:21.044 04:35:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:13:21.044 04:35:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:13:21.044 04:35:27 -- spdk/autobuild.sh@16 -- $ date -u
00:13:21.044 Wed Nov 27 04:35:27 AM UTC 2024
00:13:21.044 04:35:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:13:21.044 v25.01-pre-276-g78decfef6
00:13:21.044 04:35:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:13:21.044 04:35:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:13:21.044 04:35:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:13:21.044 04:35:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:13:21.044 04:35:27 -- common/autotest_common.sh@10 -- $ set +x
00:13:21.044 ************************************
00:13:21.044 START TEST asan
00:13:21.044 ************************************
00:13:21.044 using asan
00:13:21.044 04:35:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:13:21.044
00:13:21.044 real 0m0.000s
00:13:21.044 user 0m0.000s
00:13:21.044 sys 0m0.000s
00:13:21.044 04:35:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:13:21.044 04:35:27 asan -- common/autotest_common.sh@10 -- $ set +x
00:13:21.044 ************************************
00:13:21.044 END TEST asan
00:13:21.044 ************************************
00:13:21.044 04:35:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:13:21.044 04:35:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:13:21.044 04:35:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:13:21.044 04:35:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:13:21.044 04:35:27 -- common/autotest_common.sh@10 -- $ set +x
00:13:21.044 ************************************
00:13:21.044 START TEST ubsan
00:13:21.044 ************************************
00:13:21.044 using ubsan
00:13:21.044 04:35:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:13:21.044
00:13:21.044 real 0m0.000s
00:13:21.044 user 0m0.000s
00:13:21.044 sys 0m0.000s
00:13:21.044 04:35:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:13:21.044 04:35:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:13:21.044 ************************************
00:13:21.044 END TEST ubsan
00:13:21.044 ************************************
00:13:21.044 04:35:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:13:21.044 04:35:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:13:21.044 04:35:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:13:21.044 04:35:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:13:21.044 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:13:21.044 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:13:21.302 Using 'verbs' RDMA provider
00:13:31.827 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:13:41.794 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:13:42.359 Creating mk/config.mk...done.
00:13:42.359 Creating mk/cc.flags.mk...done.
00:13:42.359 Type 'make' to build.
00:13:42.359 04:35:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:13:42.359 04:35:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:13:42.359 04:35:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:13:42.359 04:35:49 -- common/autotest_common.sh@10 -- $ set +x
00:13:42.359 ************************************
00:13:42.359 START TEST make
00:13:42.359 ************************************
00:13:42.359 04:35:49 make -- common/autotest_common.sh@1129 -- $ make -j10
00:13:42.359 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:13:42.359 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:13:42.359 meson setup builddir \
00:13:42.359 -Dwith-libaio=enabled \
00:13:42.359 -Dwith-liburing=enabled \
00:13:42.359 -Dwith-libvfn=disabled \
00:13:42.359 -Dwith-spdk=disabled \
00:13:42.359 -Dexamples=false \
00:13:42.359 -Dtests=false \
00:13:42.359 -Dtools=false && \
00:13:42.359 meson compile -C builddir && \
00:13:42.359 cd -)
00:13:42.359 make[1]: Nothing to be done for 'all'.
00:13:44.889 The Meson build system
00:13:44.889 Version: 1.5.0
00:13:44.889 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:13:44.889 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:13:44.889 Build type: native build
00:13:44.889 Project name: xnvme
00:13:44.889 Project version: 0.7.5
00:13:44.889 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:13:44.889 C linker for the host machine: cc ld.bfd 2.40-14
00:13:44.889 Host machine cpu family: x86_64
00:13:44.889 Host machine cpu: x86_64
00:13:44.889 Message: host_machine.system: linux
00:13:44.889 Compiler for C supports arguments -Wno-missing-braces: YES
00:13:44.889 Compiler for C supports arguments -Wno-cast-function-type: YES
00:13:44.889 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:13:44.889 Run-time dependency threads found: YES
00:13:44.889 Has header "setupapi.h" : NO
00:13:44.889 Has header "linux/blkzoned.h" : YES
00:13:44.889 Has header "linux/blkzoned.h" : YES (cached)
00:13:44.889 Has header "libaio.h" : YES
00:13:44.889 Library aio found: YES
00:13:44.889 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:13:44.889 Run-time dependency liburing found: YES 2.2
00:13:44.889 Dependency libvfn skipped: feature with-libvfn disabled
00:13:44.889 Found CMake: /usr/bin/cmake (3.27.7)
00:13:44.889 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:13:44.889 Subproject spdk : skipped: feature with-spdk disabled
00:13:44.889 Run-time dependency appleframeworks found: NO (tried framework)
00:13:44.889 Run-time dependency appleframeworks found: NO (tried framework)
00:13:44.889 Library rt found: YES
00:13:44.889 Checking for function "clock_gettime" with dependency -lrt: YES
00:13:44.889 Configuring xnvme_config.h using configuration
00:13:44.889 Configuring xnvme.spec using configuration
00:13:44.889 Run-time dependency bash-completion found: YES 2.11
00:13:44.889 Message: Bash-completions: /usr/share/bash-completion/completions
00:13:44.889 Program cp found: YES (/usr/bin/cp)
00:13:44.889 Build targets in project: 3
00:13:44.889
00:13:44.889 xnvme 0.7.5
00:13:44.889
00:13:44.889 Subprojects
00:13:44.889 spdk : NO Feature 'with-spdk' disabled
00:13:44.889
00:13:44.889 User defined options
00:13:44.889 examples : false
00:13:44.889 tests : false
00:13:44.889 tools : false
00:13:44.889 with-libaio : enabled
00:13:44.889 with-liburing: enabled
00:13:44.889 with-libvfn : disabled
00:13:44.889 with-spdk : disabled
00:13:44.889
00:13:44.889 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:13:45.147 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:13:45.147 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:13:45.147 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:13:45.147 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:13:45.147 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:13:45.147 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:13:45.147 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:13:45.147 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:13:45.147 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:13:45.147 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:13:45.147 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:13:45.147 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:13:45.147 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:13:45.147 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:13:45.147 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:13:45.147 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:13:45.405 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:13:45.405 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:13:45.405 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:13:45.405 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:13:45.405 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:13:45.405 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:13:45.405 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:13:45.405 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:13:45.405 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:13:45.405 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:13:45.405 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:13:45.405 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:13:45.405 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:13:45.405 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:13:45.405 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:13:45.405 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:13:45.405 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:13:45.405 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:13:45.405 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:13:45.405 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:13:45.405 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:13:45.405 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:13:45.405 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:13:45.405 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:13:45.405 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:13:45.405 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:13:45.405 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:13:45.405 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:13:45.405 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:13:45.405 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:13:45.405 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:13:45.405 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:13:45.405 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:13:45.405 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:13:45.405 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:13:45.405 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:13:45.405 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:13:45.405 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:13:45.663 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:13:45.663 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:13:45.663 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:13:45.663 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:13:45.663 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:13:45.663 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:13:45.663 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:13:45.663 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:13:45.663 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:13:45.663 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:13:45.663 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:13:45.663 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:13:45.663 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:13:45.663 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:13:45.663 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:13:45.663 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:13:45.922 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:13:45.922 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:13:45.922 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:13:45.922 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:13:46.180 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:13:46.180 [75/76] Linking static target lib/libxnvme.a
00:13:46.180 [76/76] Linking target lib/libxnvme.so.0.7.5
00:13:46.180 INFO: autodetecting backend as ninja
00:13:46.180 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:13:46.183 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:13:52.814 The Meson build system
00:13:52.814 Version: 1.5.0
00:13:52.814 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:13:52.814 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:13:52.814 Build type: native build
00:13:52.814 Program cat found: YES (/usr/bin/cat)
00:13:52.814 Project name: DPDK
00:13:52.814 Project version: 24.03.0
00:13:52.814 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:13:52.814 C linker for the host machine: cc ld.bfd 2.40-14
00:13:52.814 Host machine cpu family: x86_64
00:13:52.814 Host machine cpu: x86_64
00:13:52.814 Message: ## Building in Developer Mode ##
00:13:52.814 Program pkg-config found: YES (/usr/bin/pkg-config)
00:13:52.814 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:13:52.814 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:13:52.814 Program python3 found: YES (/usr/bin/python3)
00:13:52.814 Program cat found: YES (/usr/bin/cat)
00:13:52.814 Compiler for C supports arguments -march=native: YES
00:13:52.814 Checking for size of "void *" : 8
00:13:52.814 Checking for size of "void *" : 8 (cached)
00:13:52.814 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:13:52.814 Library m found: YES
00:13:52.814 Library numa found: YES
00:13:52.814 Has header "numaif.h" : YES
00:13:52.814 Library fdt found: NO
00:13:52.814 Library execinfo found: NO
00:13:52.814 Has header "execinfo.h" : YES
00:13:52.814 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:13:52.814 Run-time dependency libarchive found: NO (tried pkgconfig)
00:13:52.814 Run-time dependency libbsd found: NO (tried pkgconfig)
00:13:52.814 Run-time dependency jansson found: NO (tried pkgconfig)
00:13:52.814 Run-time dependency openssl found: YES 3.1.1
00:13:52.814 Run-time dependency libpcap found: YES 1.10.4
00:13:52.814 Has header "pcap.h" with dependency libpcap: YES
00:13:52.815 Compiler for C supports arguments -Wcast-qual: YES
00:13:52.815 Compiler for C supports arguments -Wdeprecated: YES
00:13:52.815 Compiler for C supports arguments -Wformat: YES
00:13:52.815 Compiler for C supports arguments -Wformat-nonliteral: NO
00:13:52.815 Compiler for C supports arguments -Wformat-security: NO
00:13:52.815 Compiler for C supports arguments -Wmissing-declarations: YES
00:13:52.815 Compiler for C supports arguments -Wmissing-prototypes: YES
00:13:52.815 Compiler for C supports arguments -Wnested-externs: YES
00:13:52.815 Compiler for C supports arguments -Wold-style-definition: YES
00:13:52.815 Compiler for C supports arguments -Wpointer-arith: YES
00:13:52.815 Compiler for C supports arguments -Wsign-compare: YES
00:13:52.815 Compiler for C supports arguments -Wstrict-prototypes: YES
00:13:52.815 Compiler for C supports arguments -Wundef: YES
00:13:52.815 Compiler for C supports arguments -Wwrite-strings: YES
00:13:52.815 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:13:52.815 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:13:52.815 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:13:52.815 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:13:52.815 Program objdump found: YES (/usr/bin/objdump)
00:13:52.815 Compiler for C supports arguments -mavx512f: YES
00:13:52.815 Checking if "AVX512 checking" compiles: YES
00:13:52.815 Fetching value of define "__SSE4_2__" : 1
00:13:52.815 Fetching value of define "__AES__" : 1
00:13:52.815 Fetching value of define "__AVX__" : 1
00:13:52.815 Fetching value of define "__AVX2__" : 1
00:13:52.815 Fetching value of define "__AVX512BW__" : 1
00:13:52.815 Fetching value of define "__AVX512CD__" : 1
00:13:52.815 Fetching value of define "__AVX512DQ__" : 1
00:13:52.815 Fetching value of define "__AVX512F__" : 1
00:13:52.815 Fetching value of define "__AVX512VL__" : 1
00:13:52.815 Fetching value of define "__PCLMUL__" : 1
00:13:52.815 Fetching value of define "__RDRND__" : 1
00:13:52.815 Fetching value of define "__RDSEED__" : 1
00:13:52.815 Fetching value of define "__VPCLMULQDQ__" : 1
00:13:52.815 Fetching value of define "__znver1__" : (undefined)
00:13:52.815 Fetching value of define "__znver2__" : (undefined)
00:13:52.815 Fetching value of define "__znver3__" : (undefined)
00:13:52.815 Fetching value of define "__znver4__" : (undefined)
00:13:52.815 Library asan found: YES
00:13:52.815 Compiler for C supports arguments -Wno-format-truncation: YES
00:13:52.815 Message: lib/log: Defining dependency "log"
00:13:52.815 Message: lib/kvargs: Defining dependency "kvargs"
00:13:52.815 Message: lib/telemetry: Defining dependency "telemetry"
00:13:52.815 Library rt found: YES
00:13:52.815 Checking for function "getentropy" : NO
00:13:52.815 Message: lib/eal: Defining dependency "eal"
00:13:52.815 Message: lib/ring: Defining dependency "ring"
00:13:52.815 Message: lib/rcu: Defining dependency "rcu"
00:13:52.815 Message: lib/mempool: Defining dependency "mempool"
00:13:52.815 Message: lib/mbuf: Defining dependency "mbuf"
00:13:52.815 Fetching value of define "__PCLMUL__" : 1 (cached)
00:13:52.815 Fetching value of define "__AVX512F__" : 1 (cached)
00:13:52.815 Fetching value of define "__AVX512BW__" : 1 (cached)
00:13:52.815 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:13:52.815 Fetching value of define "__AVX512VL__" : 1 (cached)
00:13:52.815 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:13:52.815 Compiler for C supports arguments -mpclmul: YES
00:13:52.815 Compiler for C supports arguments -maes: YES
00:13:52.815 Compiler for C supports arguments -mavx512f: YES (cached)
00:13:52.815 Compiler for C supports arguments -mavx512bw: YES
00:13:52.815 Compiler for C supports arguments -mavx512dq: YES
00:13:52.815 Compiler for C supports arguments -mavx512vl: YES
00:13:52.815 Compiler for C supports arguments -mvpclmulqdq: YES
00:13:52.815 Compiler for C supports arguments -mavx2: YES
00:13:52.815 Compiler for C supports arguments -mavx: YES
00:13:52.815 Message: lib/net: Defining dependency "net"
00:13:52.815 Message: lib/meter: Defining dependency "meter"
00:13:52.815 Message: lib/ethdev: Defining dependency "ethdev"
00:13:52.815 Message: lib/pci: Defining dependency "pci"
00:13:52.815 Message: lib/cmdline: Defining dependency "cmdline"
00:13:52.815 Message: lib/hash: Defining dependency "hash"
00:13:52.815 Message: lib/timer: Defining dependency "timer"
00:13:52.815 Message: lib/compressdev: Defining dependency "compressdev"
00:13:52.815 Message: lib/cryptodev: Defining dependency "cryptodev"
00:13:52.815 Message: lib/dmadev: Defining dependency "dmadev"
00:13:52.815 Compiler for C supports arguments -Wno-cast-qual: YES
00:13:52.815 Message: lib/power: Defining dependency "power"
00:13:52.815 Message: lib/reorder: Defining dependency "reorder"
00:13:52.815 Message: lib/security: Defining dependency "security"
00:13:52.815 Has header "linux/userfaultfd.h" : YES
00:13:52.815 Has header "linux/vduse.h" : YES
00:13:52.815 Message: lib/vhost: Defining dependency "vhost"
00:13:52.815 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:13:52.815 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:13:52.815 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:13:52.815 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:13:52.815 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:13:52.815 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:13:52.815 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:13:52.815 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:13:52.815 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:13:52.815 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:13:52.815 Program doxygen found: YES (/usr/local/bin/doxygen)
00:13:52.815 Configuring doxy-api-html.conf using configuration
00:13:52.815 Configuring doxy-api-man.conf using configuration
00:13:52.815 Program mandb found: YES (/usr/bin/mandb)
00:13:52.815 Program sphinx-build found: NO
00:13:52.815 Configuring rte_build_config.h using configuration
00:13:52.815 Message:
00:13:52.815 =================
00:13:52.815 Applications Enabled
00:13:52.815 =================
00:13:52.815
00:13:52.815 apps:
00:13:52.815
00:13:52.815
00:13:52.815 Message:
00:13:52.815 =================
00:13:52.816 Libraries Enabled
00:13:52.816 =================
00:13:52.816
00:13:52.816 libs:
00:13:52.816 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:13:52.816 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:13:52.816 cryptodev, dmadev, power, reorder, security, vhost,
00:13:52.816
00:13:52.816 Message:
00:13:52.816 ===============
00:13:52.816 Drivers Enabled
00:13:52.816 ===============
00:13:52.816
00:13:52.816 common:
00:13:52.816
00:13:52.816 bus:
00:13:52.816 pci, vdev,
00:13:52.816 mempool:
00:13:52.816 ring,
00:13:52.816 dma:
00:13:52.816
00:13:52.816 net:
00:13:52.816
00:13:52.816 crypto:
00:13:52.816
00:13:52.816 compress:
00:13:52.816
00:13:52.816 vdpa:
00:13:52.816
00:13:52.816
00:13:52.816 Message:
00:13:52.816 =================
00:13:52.816 Content Skipped
00:13:52.816 =================
00:13:52.816
00:13:52.816 apps:
00:13:52.816 dumpcap: explicitly disabled via build config
00:13:52.816 graph: explicitly disabled via build config
00:13:52.816 pdump: explicitly disabled via build config
00:13:52.816 proc-info: explicitly disabled via build config
00:13:52.816 test-acl: explicitly disabled via build config
00:13:52.816 test-bbdev: explicitly disabled via build config
00:13:52.816 test-cmdline: explicitly disabled via build config
00:13:52.816 test-compress-perf: explicitly disabled via build config
00:13:52.816 test-crypto-perf: explicitly disabled via build config
00:13:52.816 test-dma-perf: explicitly disabled via build config
00:13:52.816 test-eventdev: explicitly disabled via build config
00:13:52.816 test-fib: explicitly disabled via build config
00:13:52.816 test-flow-perf: explicitly disabled via build config
00:13:52.816 test-gpudev: explicitly disabled via build config
00:13:52.816 test-mldev: explicitly disabled via build config
00:13:52.816 test-pipeline: explicitly disabled via build config
00:13:52.816 test-pmd: explicitly disabled via build config
00:13:52.816 test-regex: explicitly disabled via build config
00:13:52.816 test-sad: explicitly disabled via build config
00:13:52.816 test-security-perf: explicitly disabled via build config
00:13:52.816
00:13:52.816 libs:
00:13:52.816 argparse: explicitly disabled via build config
00:13:52.816 metrics: explicitly disabled via build config
00:13:52.816 acl: explicitly disabled via build config
00:13:52.816 bbdev: explicitly disabled via build config
00:13:52.816 bitratestats: explicitly disabled via build config
00:13:52.816 bpf: explicitly disabled via build config
00:13:52.816 cfgfile: explicitly disabled via build config
00:13:52.816 distributor: explicitly disabled via build config
00:13:52.816 efd: explicitly disabled via build config
00:13:52.816 eventdev: explicitly disabled via build config
00:13:52.816 dispatcher: explicitly disabled via build config
00:13:52.816 gpudev: explicitly disabled via build config
00:13:52.816 gro: explicitly disabled via build config
00:13:52.816 gso: explicitly disabled via build config
00:13:52.816 ip_frag: explicitly disabled via build config
00:13:52.816 jobstats: explicitly disabled via build config
00:13:52.816 latencystats: explicitly disabled via build config
00:13:52.816 lpm: explicitly disabled via build config
00:13:52.816 member: explicitly disabled via build config
00:13:52.816 pcapng: explicitly disabled via build config
00:13:52.816 rawdev: explicitly disabled via build config
00:13:52.816 regexdev: explicitly disabled via build config
00:13:52.816 mldev: explicitly disabled via build config
00:13:52.816 rib: explicitly disabled via build config
00:13:52.816 sched: explicitly disabled via build config
00:13:52.816 stack: explicitly disabled via build config
00:13:52.816 ipsec: explicitly disabled via build config
00:13:52.816 pdcp: explicitly disabled via build config
00:13:52.816 fib: explicitly disabled via build config
00:13:52.816 port: explicitly disabled via build config
00:13:52.816 pdump: explicitly disabled via build config
00:13:52.816 table: explicitly disabled via build config
00:13:52.816 pipeline: explicitly disabled via build config
00:13:52.816 graph: explicitly disabled via build config
00:13:52.816 node: explicitly disabled via build config
00:13:52.816
00:13:52.816 drivers:
00:13:52.816 common/cpt: not in enabled drivers build config
00:13:52.816 common/dpaax: not in enabled drivers build config
00:13:52.816 common/iavf: not in enabled drivers build config
00:13:52.816 common/idpf: not in enabled drivers build config
00:13:52.816 common/ionic: not in enabled drivers build config
00:13:52.816 common/mvep: not in enabled drivers build config
00:13:52.816 common/octeontx: not in enabled drivers build config
00:13:52.816 bus/auxiliary: not in enabled drivers build config
00:13:52.816 bus/cdx: not in enabled drivers build config
00:13:52.816 bus/dpaa: not in enabled drivers build config
00:13:52.816 bus/fslmc: not in enabled drivers build config
00:13:52.816 bus/ifpga: not in enabled drivers build config
00:13:52.816 bus/platform: not in enabled drivers build config
00:13:52.816 bus/uacce: not in enabled drivers build config
00:13:52.816 bus/vmbus: not in enabled drivers build config
00:13:52.816 common/cnxk: not in enabled drivers build config
00:13:52.816 common/mlx5: not in enabled drivers build config
00:13:52.816 common/nfp: not in enabled drivers build config
00:13:52.816 common/nitrox: not in enabled drivers build config
00:13:52.816 common/qat: not in enabled drivers build config
00:13:52.816 common/sfc_efx: not in enabled drivers build config
00:13:52.816 mempool/bucket: not in enabled drivers build config
00:13:52.816 mempool/cnxk: not in enabled drivers build config
00:13:52.816 mempool/dpaa: not in enabled drivers build config
00:13:52.816 mempool/dpaa2: not in enabled drivers build config
00:13:52.816 mempool/octeontx: not in enabled drivers build config
00:13:52.816 mempool/stack: not in enabled drivers build config
00:13:52.816 dma/cnxk: not in enabled drivers build config
00:13:52.816 dma/dpaa: not in enabled drivers build config
00:13:52.816 dma/dpaa2: not in enabled drivers build config
00:13:52.817 dma/hisilicon: not in enabled drivers build config
00:13:52.817 dma/idxd: not in enabled drivers build config
00:13:52.817 dma/ioat: not in enabled drivers build config
00:13:52.817 dma/skeleton: not in enabled drivers build config
00:13:52.817 net/af_packet: not in enabled drivers build config
00:13:52.817 net/af_xdp: not in enabled drivers build config
00:13:52.817 net/ark: not in enabled drivers build config
00:13:52.817 net/atlantic: not in enabled drivers build config
00:13:52.817 net/avp: not in enabled drivers build config
00:13:52.817 net/axgbe: not in enabled drivers build config
00:13:52.817 net/bnx2x: not in enabled drivers build config
00:13:52.817 net/bnxt: not in enabled drivers build config
00:13:52.817 net/bonding: not in enabled drivers build config
00:13:52.817 net/cnxk: not in enabled drivers build config
00:13:52.817 net/cpfl: not in enabled drivers
build config 00:13:52.817 net/cxgbe: not in enabled drivers build config 00:13:52.817 net/dpaa: not in enabled drivers build config 00:13:52.817 net/dpaa2: not in enabled drivers build config 00:13:52.817 net/e1000: not in enabled drivers build config 00:13:52.817 net/ena: not in enabled drivers build config 00:13:52.817 net/enetc: not in enabled drivers build config 00:13:52.817 net/enetfec: not in enabled drivers build config 00:13:52.817 net/enic: not in enabled drivers build config 00:13:52.817 net/failsafe: not in enabled drivers build config 00:13:52.817 net/fm10k: not in enabled drivers build config 00:13:52.817 net/gve: not in enabled drivers build config 00:13:52.817 net/hinic: not in enabled drivers build config 00:13:52.817 net/hns3: not in enabled drivers build config 00:13:52.817 net/i40e: not in enabled drivers build config 00:13:52.817 net/iavf: not in enabled drivers build config 00:13:52.817 net/ice: not in enabled drivers build config 00:13:52.817 net/idpf: not in enabled drivers build config 00:13:52.817 net/igc: not in enabled drivers build config 00:13:52.817 net/ionic: not in enabled drivers build config 00:13:52.817 net/ipn3ke: not in enabled drivers build config 00:13:52.817 net/ixgbe: not in enabled drivers build config 00:13:52.817 net/mana: not in enabled drivers build config 00:13:52.817 net/memif: not in enabled drivers build config 00:13:52.817 net/mlx4: not in enabled drivers build config 00:13:52.817 net/mlx5: not in enabled drivers build config 00:13:52.817 net/mvneta: not in enabled drivers build config 00:13:52.817 net/mvpp2: not in enabled drivers build config 00:13:52.817 net/netvsc: not in enabled drivers build config 00:13:52.817 net/nfb: not in enabled drivers build config 00:13:52.817 net/nfp: not in enabled drivers build config 00:13:52.817 net/ngbe: not in enabled drivers build config 00:13:52.817 net/null: not in enabled drivers build config 00:13:52.817 net/octeontx: not in enabled drivers build config 00:13:52.817 net/octeon_ep: not in enabled drivers build config 00:13:52.817 net/pcap: not in enabled drivers build config 00:13:52.817 net/pfe: not in enabled drivers build config 00:13:52.817 net/qede: not in enabled drivers build config 00:13:52.817 net/ring: not in enabled drivers build config 00:13:52.817 net/sfc: not in enabled drivers build config 00:13:52.817 net/softnic: not in enabled drivers build config 00:13:52.817 net/tap: not in enabled drivers build config 00:13:52.817 net/thunderx: not in enabled drivers build config 00:13:52.817 net/txgbe: not in enabled drivers build config 00:13:52.817 net/vdev_netvsc: not in enabled drivers build config 00:13:52.817 net/vhost: not in enabled drivers build config 00:13:52.817 net/virtio: not in enabled drivers build config 00:13:52.817 net/vmxnet3: not in enabled drivers build config 00:13:52.817 raw/*: missing internal dependency, "rawdev" 00:13:52.817 crypto/armv8: not in enabled drivers build config 00:13:52.817 crypto/bcmfs: not in enabled drivers build config 00:13:52.817 crypto/caam_jr: not in enabled drivers build config 00:13:52.817 crypto/ccp: not in enabled drivers build config 00:13:52.817 crypto/cnxk: not in enabled drivers build config 00:13:52.817 crypto/dpaa_sec: not in enabled drivers build config 00:13:52.817 crypto/dpaa2_sec: not in enabled drivers build config 00:13:52.817 crypto/ipsec_mb: not in enabled drivers build config 00:13:52.817 crypto/mlx5: not in enabled drivers build config 00:13:52.817 crypto/mvsam: not in enabled drivers build config 00:13:52.817 crypto/nitrox: 
not in enabled drivers build config 00:13:52.817 crypto/null: not in enabled drivers build config 00:13:52.817 crypto/octeontx: not in enabled drivers build config 00:13:52.817 crypto/openssl: not in enabled drivers build config 00:13:52.817 crypto/scheduler: not in enabled drivers build config 00:13:52.817 crypto/uadk: not in enabled drivers build config 00:13:52.817 crypto/virtio: not in enabled drivers build config 00:13:52.817 compress/isal: not in enabled drivers build config 00:13:52.817 compress/mlx5: not in enabled drivers build config 00:13:52.817 compress/nitrox: not in enabled drivers build config 00:13:52.817 compress/octeontx: not in enabled drivers build config 00:13:52.817 compress/zlib: not in enabled drivers build config 00:13:52.817 regex/*: missing internal dependency, "regexdev" 00:13:52.817 ml/*: missing internal dependency, "mldev" 00:13:52.817 vdpa/ifc: not in enabled drivers build config 00:13:52.817 vdpa/mlx5: not in enabled drivers build config 00:13:52.817 vdpa/nfp: not in enabled drivers build config 00:13:52.817 vdpa/sfc: not in enabled drivers build config 00:13:52.817 event/*: missing internal dependency, "eventdev" 00:13:52.817 baseband/*: missing internal dependency, "bbdev" 00:13:52.817 gpu/*: missing internal dependency, "gpudev" 00:13:52.817 00:13:52.817 00:13:52.817 Build targets in project: 84 00:13:52.817 00:13:52.817 DPDK 24.03.0 00:13:52.817 00:13:52.817 User defined options 00:13:52.817 buildtype : debug 00:13:52.817 default_library : shared 00:13:52.817 libdir : lib 00:13:52.817 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:52.817 b_sanitize : address 00:13:52.817 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:52.817 c_link_args : 00:13:52.817 cpu_instruction_set: native 00:13:52.817 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:52.818 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:52.818 enable_docs : false 00:13:52.818 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:13:52.818 enable_kmods : false 00:13:52.818 max_lcores : 128 00:13:52.818 tests : false 00:13:52.818 00:13:52.818 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:53.077 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:53.077 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:53.077 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:53.077 [3/267] Linking static target lib/librte_kvargs.a 00:13:53.077 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:53.077 [5/267] Linking static target lib/librte_log.a 00:13:53.077 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:53.336 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:53.336 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:53.594 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:53.594 [10/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:53.594 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:53.594 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:53.594 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.594 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:53.594 [15/267] Linking static target lib/librte_telemetry.a 00:13:53.594 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:53.594 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:53.594 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:53.852 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:53.852 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:53.852 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.852 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:53.852 [23/267] Linking target lib/librte_log.so.24.1 00:13:53.852 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:54.110 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:54.110 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:54.110 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:54.110 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:13:54.110 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:54.110 [30/267] Linking target lib/librte_kvargs.so.24.1 00:13:54.369 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:54.369 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:54.369 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.369 [34/267] Linking target lib/librte_telemetry.so.24.1 00:13:54.369 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:54.369 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:54.369 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:54.369 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:54.369 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:54.369 [40/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:13:54.369 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:54.369 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:13:54.369 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:54.627 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:54.627 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:54.627 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:54.627 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:54.885 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
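[Editor's note: the "User defined options" block recorded above maps one-to-one onto meson -D flags, so the DPDK configure step can be reproduced by hand outside the CI harness. A minimal sketch, assuming the DPDK 24.03 sources checked out at the source directory implied by the build-tmp path in this log; every option value below is copied verbatim from the options block, and in this run the equivalent command is issued by SPDK's own build scripts rather than typed manually:

  # Hypothetical manual reproduction of the meson configuration logged above.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
  # Build with the same backend command meson reports later in this log
  # ("ninja -C .../dpdk/build-tmp -j 10").
  ninja -C build-tmp -j 10

End of editor's note; the recorded build output resumes below.]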
00:13:54.885 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:54.885 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:54.885 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:54.885 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:54.885 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:55.144 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:55.144 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:55.144 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:55.144 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:55.144 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:55.144 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:55.144 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:55.144 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:55.451 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:55.451 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:55.451 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:55.451 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:55.451 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:55.451 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:55.709 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:55.709 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:55.709 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:55.709 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:55.709 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:55.709 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:55.709 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:55.709 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:55.968 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:55.968 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:55.968 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:55.968 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:56.226 [80/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:56.226 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:56.226 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:56.226 [83/267] Linking static target lib/librte_ring.a 00:13:56.226 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:56.226 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:56.226 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:56.226 [87/267] Linking static target lib/librte_eal.a 00:13:56.486 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:56.486 [89/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:13:56.486 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:56.486 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:56.744 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:56.744 [93/267] Linking static target lib/librte_mempool.a 00:13:56.744 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:56.744 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:56.744 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:57.004 [97/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:57.004 [98/267] Linking static target lib/librte_rcu.a 00:13:57.004 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:57.004 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:57.004 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:57.004 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:57.004 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:57.004 [104/267] Linking static target lib/librte_mbuf.a 00:13:57.262 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:57.262 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:57.262 [107/267] Linking static target lib/librte_meter.a 00:13:57.262 [108/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:13:57.262 [109/267] Linking static target lib/librte_net.a 00:13:57.262 [110/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.262 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:57.262 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:57.522 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:57.522 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.522 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.522 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.522 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:57.782 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:57.782 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:57.782 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:58.041 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:58.041 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:58.041 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:58.041 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:58.041 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:58.041 [126/267] Linking static target lib/librte_pci.a 00:13:58.300 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:58.300 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:58.300 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:58.300 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:58.300 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:58.300 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:13:58.300 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:58.300 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:58.300 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:58.300 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:58.300 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:58.560 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:58.560 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:58.560 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:58.560 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:58.560 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:58.560 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:13:58.560 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:58.560 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:58.560 [146/267] Linking static target lib/librte_cmdline.a 00:13:58.818 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:58.818 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:58.818 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:58.818 [150/267] Linking static target lib/librte_timer.a 00:13:58.818 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:13:59.077 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:59.077 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:59.077 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:59.335 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:59.335 [156/267] Linking static target lib/librte_compressdev.a 00:13:59.335 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:59.335 [158/267] Linking static target lib/librte_hash.a 00:13:59.335 [159/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:59.335 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:59.335 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:59.335 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:59.594 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:59.595 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:59.853 [165/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:59.853 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:59.853 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:59.853 [168/267] Linking static target lib/librte_ethdev.a 00:13:59.853 [169/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:59.853 [170/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:59.853 [171/267] 
Linking static target lib/librte_dmadev.a 00:13:59.853 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:59.853 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:59.853 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:14:00.112 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:14:00.112 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:14:00.112 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:14:00.112 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:14:00.112 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.112 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:14:00.370 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:14:00.370 [182/267] Linking static target lib/librte_power.a 00:14:00.628 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.628 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:14:00.629 [185/267] Linking static target lib/librte_reorder.a 00:14:00.629 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:14:00.629 [187/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:14:00.629 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:14:00.629 [189/267] Linking static target lib/librte_cryptodev.a 00:14:00.629 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:14:00.629 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:14:00.629 [192/267] Linking static target lib/librte_security.a 00:14:00.887 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:14:01.145 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:14:01.145 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:14:01.402 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:14:01.402 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:14:01.402 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:14:01.402 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:14:01.659 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:14:01.659 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:14:01.659 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:14:01.659 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:14:01.916 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:14:01.916 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:14:01.916 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:14:01.916 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:14:01.916 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:14:01.916 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:14:02.173 [210/267] Generating drivers/rte_bus_pci.pmd.c with 
a custom command 00:14:02.173 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:14:02.173 [212/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:02.173 [213/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:02.173 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:02.173 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:02.173 [216/267] Linking static target drivers/librte_bus_pci.a 00:14:02.173 [217/267] Linking static target drivers/librte_bus_vdev.a 00:14:02.173 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:14:02.173 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:14:02.431 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:02.431 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:14:02.431 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:02.431 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:02.431 [224/267] Linking static target drivers/librte_mempool_ring.a 00:14:02.431 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:02.688 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:02.945 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:14:03.877 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:03.877 [229/267] Linking target lib/librte_eal.so.24.1 00:14:03.877 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:14:04.136 [231/267] Linking target lib/librte_ring.so.24.1 00:14:04.136 [232/267] Linking target lib/librte_meter.so.24.1 00:14:04.136 [233/267] Linking target lib/librte_timer.so.24.1 00:14:04.136 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:14:04.136 [235/267] Linking target lib/librte_pci.so.24.1 00:14:04.136 [236/267] Linking target lib/librte_dmadev.so.24.1 00:14:04.136 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:14:04.136 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:14:04.136 [239/267] Linking target lib/librte_rcu.so.24.1 00:14:04.136 [240/267] Linking target lib/librte_mempool.so.24.1 00:14:04.136 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:14:04.136 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:14:04.136 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:14:04.136 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:14:04.136 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:14:04.136 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:14:04.393 [247/267] Linking target lib/librte_mbuf.so.24.1 00:14:04.393 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:14:04.393 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:14:04.393 [250/267] Linking 
target lib/librte_net.so.24.1 00:14:04.393 [251/267] Linking target lib/librte_compressdev.so.24.1 00:14:04.393 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:14:04.393 [253/267] Linking target lib/librte_reorder.so.24.1 00:14:04.393 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:14:04.650 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:14:04.650 [256/267] Linking target lib/librte_cmdline.so.24.1 00:14:04.650 [257/267] Linking target lib/librte_hash.so.24.1 00:14:04.650 [258/267] Linking target lib/librte_security.so.24.1 00:14:04.650 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:14:05.213 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:05.213 [261/267] Linking target lib/librte_ethdev.so.24.1 00:14:05.469 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:14:05.469 [263/267] Linking target lib/librte_power.so.24.1 00:14:05.726 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:14:05.726 [265/267] Linking static target lib/librte_vhost.a 00:14:07.141 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:07.142 [267/267] Linking target lib/librte_vhost.so.24.1 00:14:07.142 INFO: autodetecting backend as ninja 00:14:07.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:14:22.031 CC lib/log/log.o 00:14:22.031 CC lib/log/log_deprecated.o 00:14:22.031 CC lib/log/log_flags.o 00:14:22.031 CC lib/ut_mock/mock.o 00:14:22.031 CC lib/ut/ut.o 00:14:22.031 LIB libspdk_log.a 00:14:22.031 LIB libspdk_ut_mock.a 00:14:22.031 LIB libspdk_ut.a 00:14:22.031 SO libspdk_ut_mock.so.6.0 00:14:22.031 SO libspdk_log.so.7.1 00:14:22.031 SO libspdk_ut.so.2.0 00:14:22.031 SYMLINK libspdk_ut_mock.so 00:14:22.031 SYMLINK libspdk_log.so 00:14:22.031 SYMLINK libspdk_ut.so 00:14:22.031 CC lib/ioat/ioat.o 00:14:22.031 CC lib/dma/dma.o 00:14:22.031 CXX lib/trace_parser/trace.o 00:14:22.031 CC lib/util/cpuset.o 00:14:22.031 CC lib/util/bit_array.o 00:14:22.031 CC lib/util/base64.o 00:14:22.031 CC lib/util/crc16.o 00:14:22.031 CC lib/util/crc32c.o 00:14:22.031 CC lib/util/crc32.o 00:14:22.031 CC lib/vfio_user/host/vfio_user_pci.o 00:14:22.031 CC lib/util/crc32_ieee.o 00:14:22.031 CC lib/util/crc64.o 00:14:22.031 CC lib/util/dif.o 00:14:22.031 LIB libspdk_dma.a 00:14:22.031 CC lib/util/fd.o 00:14:22.031 SO libspdk_dma.so.5.0 00:14:22.031 CC lib/util/fd_group.o 00:14:22.031 CC lib/util/file.o 00:14:22.031 CC lib/util/hexlify.o 00:14:22.031 SYMLINK libspdk_dma.so 00:14:22.031 CC lib/util/iov.o 00:14:22.031 CC lib/util/math.o 00:14:22.031 LIB libspdk_ioat.a 00:14:22.031 SO libspdk_ioat.so.7.0 00:14:22.031 CC lib/util/net.o 00:14:22.031 SYMLINK libspdk_ioat.so 00:14:22.031 CC lib/vfio_user/host/vfio_user.o 00:14:22.031 CC lib/util/pipe.o 00:14:22.031 CC lib/util/strerror_tls.o 00:14:22.031 CC lib/util/string.o 00:14:22.031 CC lib/util/uuid.o 00:14:22.031 CC lib/util/xor.o 00:14:22.031 CC lib/util/zipf.o 00:14:22.031 CC lib/util/md5.o 00:14:22.031 LIB libspdk_vfio_user.a 00:14:22.031 SO libspdk_vfio_user.so.5.0 00:14:22.031 SYMLINK libspdk_vfio_user.so 00:14:22.031 LIB libspdk_util.a 00:14:22.031 SO libspdk_util.so.10.1 00:14:22.031 LIB libspdk_trace_parser.a 00:14:22.031 SO libspdk_trace_parser.so.6.0 00:14:22.031 SYMLINK libspdk_util.so 
00:14:22.031 SYMLINK libspdk_trace_parser.so 00:14:22.031 CC lib/env_dpdk/env.o 00:14:22.031 CC lib/rdma_utils/rdma_utils.o 00:14:22.031 CC lib/env_dpdk/pci.o 00:14:22.031 CC lib/env_dpdk/memory.o 00:14:22.031 CC lib/env_dpdk/init.o 00:14:22.031 CC lib/vmd/vmd.o 00:14:22.031 CC lib/vmd/led.o 00:14:22.031 CC lib/conf/conf.o 00:14:22.031 CC lib/idxd/idxd.o 00:14:22.031 CC lib/json/json_parse.o 00:14:22.031 CC lib/json/json_util.o 00:14:22.031 LIB libspdk_rdma_utils.a 00:14:22.031 LIB libspdk_conf.a 00:14:22.031 SO libspdk_rdma_utils.so.1.0 00:14:22.031 CC lib/json/json_write.o 00:14:22.031 SO libspdk_conf.so.6.0 00:14:22.291 SYMLINK libspdk_rdma_utils.so 00:14:22.291 CC lib/env_dpdk/threads.o 00:14:22.291 SYMLINK libspdk_conf.so 00:14:22.291 CC lib/env_dpdk/pci_ioat.o 00:14:22.291 CC lib/env_dpdk/pci_virtio.o 00:14:22.291 CC lib/env_dpdk/pci_vmd.o 00:14:22.291 CC lib/env_dpdk/pci_idxd.o 00:14:22.291 CC lib/idxd/idxd_user.o 00:14:22.291 CC lib/rdma_provider/common.o 00:14:22.291 CC lib/env_dpdk/pci_event.o 00:14:22.291 LIB libspdk_json.a 00:14:22.291 CC lib/env_dpdk/sigbus_handler.o 00:14:22.291 CC lib/env_dpdk/pci_dpdk.o 00:14:22.548 SO libspdk_json.so.6.0 00:14:22.548 SYMLINK libspdk_json.so 00:14:22.548 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:22.548 CC lib/env_dpdk/pci_dpdk_2211.o 00:14:22.548 CC lib/idxd/idxd_kernel.o 00:14:22.548 CC lib/rdma_provider/rdma_provider_verbs.o 00:14:22.548 LIB libspdk_vmd.a 00:14:22.548 SO libspdk_vmd.so.6.0 00:14:22.806 CC lib/jsonrpc/jsonrpc_server.o 00:14:22.806 CC lib/jsonrpc/jsonrpc_client.o 00:14:22.806 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:22.806 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:22.806 LIB libspdk_idxd.a 00:14:22.806 SYMLINK libspdk_vmd.so 00:14:22.806 SO libspdk_idxd.so.12.1 00:14:22.806 LIB libspdk_rdma_provider.a 00:14:22.806 SO libspdk_rdma_provider.so.7.0 00:14:22.806 SYMLINK libspdk_idxd.so 00:14:22.806 SYMLINK libspdk_rdma_provider.so 00:14:23.069 LIB libspdk_jsonrpc.a 00:14:23.069 SO libspdk_jsonrpc.so.6.0 00:14:23.069 SYMLINK libspdk_jsonrpc.so 00:14:23.327 CC lib/rpc/rpc.o 00:14:23.327 LIB libspdk_env_dpdk.a 00:14:23.327 SO libspdk_env_dpdk.so.15.1 00:14:23.585 LIB libspdk_rpc.a 00:14:23.585 SO libspdk_rpc.so.6.0 00:14:23.585 SYMLINK libspdk_env_dpdk.so 00:14:23.585 SYMLINK libspdk_rpc.so 00:14:23.842 CC lib/notify/notify_rpc.o 00:14:23.842 CC lib/trace/trace.o 00:14:23.842 CC lib/trace/trace_rpc.o 00:14:23.842 CC lib/notify/notify.o 00:14:23.842 CC lib/trace/trace_flags.o 00:14:23.842 CC lib/keyring/keyring_rpc.o 00:14:23.842 CC lib/keyring/keyring.o 00:14:23.842 LIB libspdk_notify.a 00:14:23.842 SO libspdk_notify.so.6.0 00:14:23.842 LIB libspdk_keyring.a 00:14:24.100 SO libspdk_keyring.so.2.0 00:14:24.100 SYMLINK libspdk_notify.so 00:14:24.100 LIB libspdk_trace.a 00:14:24.100 SYMLINK libspdk_keyring.so 00:14:24.100 SO libspdk_trace.so.11.0 00:14:24.100 SYMLINK libspdk_trace.so 00:14:24.359 CC lib/sock/sock_rpc.o 00:14:24.359 CC lib/sock/sock.o 00:14:24.359 CC lib/thread/thread.o 00:14:24.359 CC lib/thread/iobuf.o 00:14:24.617 LIB libspdk_sock.a 00:14:24.617 SO libspdk_sock.so.10.0 00:14:24.617 SYMLINK libspdk_sock.so 00:14:24.905 CC lib/nvme/nvme_fabric.o 00:14:24.905 CC lib/nvme/nvme_ctrlr.o 00:14:24.905 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:24.905 CC lib/nvme/nvme_ns_cmd.o 00:14:24.905 CC lib/nvme/nvme_pcie.o 00:14:24.905 CC lib/nvme/nvme_ns.o 00:14:24.905 CC lib/nvme/nvme.o 00:14:24.905 CC lib/nvme/nvme_pcie_common.o 00:14:24.905 CC lib/nvme/nvme_qpair.o 00:14:25.470 CC lib/nvme/nvme_quirks.o 00:14:25.470 CC 
lib/nvme/nvme_transport.o 00:14:25.470 CC lib/nvme/nvme_discovery.o 00:14:25.728 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:25.728 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:25.728 CC lib/nvme/nvme_tcp.o 00:14:25.728 CC lib/nvme/nvme_opal.o 00:14:25.728 CC lib/nvme/nvme_io_msg.o 00:14:25.985 LIB libspdk_thread.a 00:14:25.985 SO libspdk_thread.so.11.0 00:14:25.985 CC lib/nvme/nvme_poll_group.o 00:14:25.985 SYMLINK libspdk_thread.so 00:14:25.985 CC lib/nvme/nvme_zns.o 00:14:25.985 CC lib/nvme/nvme_stubs.o 00:14:25.985 CC lib/nvme/nvme_auth.o 00:14:26.243 CC lib/accel/accel.o 00:14:26.243 CC lib/accel/accel_rpc.o 00:14:26.243 CC lib/nvme/nvme_cuse.o 00:14:26.243 CC lib/accel/accel_sw.o 00:14:26.243 CC lib/nvme/nvme_rdma.o 00:14:26.807 CC lib/blob/blobstore.o 00:14:26.807 CC lib/init/json_config.o 00:14:26.807 CC lib/virtio/virtio.o 00:14:26.807 CC lib/fsdev/fsdev.o 00:14:27.128 CC lib/init/subsystem.o 00:14:27.128 CC lib/init/subsystem_rpc.o 00:14:27.128 CC lib/virtio/virtio_vhost_user.o 00:14:27.128 CC lib/virtio/virtio_vfio_user.o 00:14:27.128 CC lib/blob/request.o 00:14:27.128 CC lib/init/rpc.o 00:14:27.128 CC lib/virtio/virtio_pci.o 00:14:27.421 CC lib/blob/zeroes.o 00:14:27.421 LIB libspdk_accel.a 00:14:27.421 LIB libspdk_init.a 00:14:27.421 SO libspdk_accel.so.16.0 00:14:27.421 SO libspdk_init.so.6.0 00:14:27.421 CC lib/blob/blob_bs_dev.o 00:14:27.421 SYMLINK libspdk_init.so 00:14:27.421 SYMLINK libspdk_accel.so 00:14:27.421 CC lib/fsdev/fsdev_io.o 00:14:27.421 CC lib/fsdev/fsdev_rpc.o 00:14:27.421 LIB libspdk_virtio.a 00:14:27.421 LIB libspdk_nvme.a 00:14:27.421 SO libspdk_virtio.so.7.0 00:14:27.421 CC lib/bdev/bdev_rpc.o 00:14:27.421 CC lib/bdev/bdev.o 00:14:27.421 CC lib/bdev/bdev_zone.o 00:14:27.421 CC lib/event/app.o 00:14:27.421 SYMLINK libspdk_virtio.so 00:14:27.421 CC lib/bdev/part.o 00:14:27.421 CC lib/bdev/scsi_nvme.o 00:14:27.421 CC lib/event/reactor.o 00:14:27.679 SO libspdk_nvme.so.15.0 00:14:27.679 CC lib/event/log_rpc.o 00:14:27.679 LIB libspdk_fsdev.a 00:14:27.679 CC lib/event/app_rpc.o 00:14:27.679 SO libspdk_fsdev.so.2.0 00:14:27.679 SYMLINK libspdk_fsdev.so 00:14:27.679 CC lib/event/scheduler_static.o 00:14:27.936 SYMLINK libspdk_nvme.so 00:14:27.936 LIB libspdk_event.a 00:14:27.936 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:14:27.936 SO libspdk_event.so.14.0 00:14:28.193 SYMLINK libspdk_event.so 00:14:28.758 LIB libspdk_fuse_dispatcher.a 00:14:28.758 SO libspdk_fuse_dispatcher.so.1.0 00:14:28.758 SYMLINK libspdk_fuse_dispatcher.so 00:14:29.324 LIB libspdk_blob.a 00:14:29.324 SO libspdk_blob.so.12.0 00:14:29.582 SYMLINK libspdk_blob.so 00:14:29.866 CC lib/blobfs/blobfs.o 00:14:29.866 CC lib/blobfs/tree.o 00:14:29.866 CC lib/lvol/lvol.o 00:14:30.433 LIB libspdk_bdev.a 00:14:30.433 SO libspdk_bdev.so.17.0 00:14:30.433 SYMLINK libspdk_bdev.so 00:14:30.433 LIB libspdk_lvol.a 00:14:30.433 SO libspdk_lvol.so.11.0 00:14:30.433 LIB libspdk_blobfs.a 00:14:30.433 CC lib/nbd/nbd.o 00:14:30.433 CC lib/nbd/nbd_rpc.o 00:14:30.433 CC lib/scsi/dev.o 00:14:30.433 CC lib/ftl/ftl_core.o 00:14:30.433 CC lib/scsi/lun.o 00:14:30.433 CC lib/nvmf/ctrlr.o 00:14:30.433 CC lib/ftl/ftl_init.o 00:14:30.433 CC lib/ublk/ublk.o 00:14:30.693 SO libspdk_blobfs.so.11.0 00:14:30.693 SYMLINK libspdk_lvol.so 00:14:30.693 CC lib/ublk/ublk_rpc.o 00:14:30.693 SYMLINK libspdk_blobfs.so 00:14:30.693 CC lib/nvmf/ctrlr_discovery.o 00:14:30.693 CC lib/scsi/port.o 00:14:30.693 CC lib/scsi/scsi.o 00:14:30.693 CC lib/scsi/scsi_bdev.o 00:14:30.693 CC lib/scsi/scsi_pr.o 00:14:30.693 CC lib/ftl/ftl_layout.o 00:14:30.693 CC 
lib/ftl/ftl_debug.o 00:14:30.951 CC lib/ftl/ftl_io.o 00:14:30.951 CC lib/ftl/ftl_sb.o 00:14:30.951 LIB libspdk_nbd.a 00:14:30.951 SO libspdk_nbd.so.7.0 00:14:30.951 CC lib/nvmf/ctrlr_bdev.o 00:14:30.951 CC lib/ftl/ftl_l2p.o 00:14:30.951 SYMLINK libspdk_nbd.so 00:14:30.951 CC lib/ftl/ftl_l2p_flat.o 00:14:30.951 CC lib/ftl/ftl_nv_cache.o 00:14:30.951 CC lib/ftl/ftl_band.o 00:14:31.210 CC lib/ftl/ftl_band_ops.o 00:14:31.210 CC lib/scsi/scsi_rpc.o 00:14:31.210 CC lib/ftl/ftl_writer.o 00:14:31.210 LIB libspdk_ublk.a 00:14:31.210 SO libspdk_ublk.so.3.0 00:14:31.210 CC lib/scsi/task.o 00:14:31.210 CC lib/ftl/ftl_rq.o 00:14:31.210 CC lib/nvmf/subsystem.o 00:14:31.210 SYMLINK libspdk_ublk.so 00:14:31.210 CC lib/nvmf/nvmf.o 00:14:31.468 CC lib/nvmf/nvmf_rpc.o 00:14:31.468 CC lib/ftl/ftl_reloc.o 00:14:31.468 CC lib/nvmf/transport.o 00:14:31.468 LIB libspdk_scsi.a 00:14:31.468 SO libspdk_scsi.so.9.0 00:14:31.468 CC lib/ftl/ftl_l2p_cache.o 00:14:31.468 CC lib/nvmf/tcp.o 00:14:31.468 SYMLINK libspdk_scsi.so 00:14:31.468 CC lib/ftl/ftl_p2l.o 00:14:32.033 CC lib/ftl/ftl_p2l_log.o 00:14:32.033 CC lib/iscsi/conn.o 00:14:32.033 CC lib/ftl/mngt/ftl_mngt.o 00:14:32.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:14:32.033 CC lib/nvmf/stubs.o 00:14:32.033 CC lib/nvmf/mdns_server.o 00:14:32.033 CC lib/nvmf/rdma.o 00:14:32.289 CC lib/nvmf/auth.o 00:14:32.289 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:14:32.289 CC lib/ftl/mngt/ftl_mngt_startup.o 00:14:32.289 CC lib/ftl/mngt/ftl_mngt_md.o 00:14:32.289 CC lib/ftl/mngt/ftl_mngt_misc.o 00:14:32.289 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_band.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:14:32.546 CC lib/iscsi/init_grp.o 00:14:32.546 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:32.546 CC lib/ftl/utils/ftl_conf.o 00:14:32.546 CC lib/iscsi/iscsi.o 00:14:32.804 CC lib/vhost/vhost.o 00:14:32.804 CC lib/iscsi/param.o 00:14:32.804 CC lib/iscsi/portal_grp.o 00:14:32.804 CC lib/ftl/utils/ftl_md.o 00:14:32.804 CC lib/iscsi/tgt_node.o 00:14:32.804 CC lib/ftl/utils/ftl_mempool.o 00:14:33.061 CC lib/ftl/utils/ftl_bitmap.o 00:14:33.061 CC lib/vhost/vhost_rpc.o 00:14:33.061 CC lib/ftl/utils/ftl_property.o 00:14:33.061 CC lib/iscsi/iscsi_subsystem.o 00:14:33.061 CC lib/iscsi/iscsi_rpc.o 00:14:33.061 CC lib/iscsi/task.o 00:14:33.318 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:33.318 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:33.318 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:33.318 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:33.318 CC lib/vhost/vhost_scsi.o 00:14:33.318 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:33.318 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:33.578 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:14:33.578 CC lib/vhost/vhost_blk.o 00:14:33.578 CC lib/vhost/rte_vhost_user.o 00:14:33.578 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:33.578 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:33.578 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:33.578 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:33.578 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:14:33.578 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:14:33.836 CC lib/ftl/base/ftl_base_dev.o 00:14:33.836 LIB libspdk_iscsi.a 00:14:33.836 CC lib/ftl/base/ftl_base_bdev.o 00:14:33.836 SO libspdk_iscsi.so.8.0 00:14:33.836 CC lib/ftl/ftl_trace.o 00:14:34.094 SYMLINK libspdk_iscsi.so 00:14:34.094 LIB libspdk_ftl.a 00:14:34.351 LIB libspdk_nvmf.a 00:14:34.351 SO libspdk_ftl.so.9.0 00:14:34.351 LIB libspdk_vhost.a 
00:14:34.351 SO libspdk_nvmf.so.20.0 00:14:34.351 SO libspdk_vhost.so.8.0 00:14:34.608 SYMLINK libspdk_ftl.so 00:14:34.608 SYMLINK libspdk_vhost.so 00:14:34.608 SYMLINK libspdk_nvmf.so 00:14:34.865 CC module/env_dpdk/env_dpdk_rpc.o 00:14:34.866 CC module/sock/posix/posix.o 00:14:34.866 CC module/blob/bdev/blob_bdev.o 00:14:34.866 CC module/accel/error/accel_error.o 00:14:34.866 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:34.866 CC module/keyring/linux/keyring.o 00:14:34.866 CC module/accel/ioat/accel_ioat.o 00:14:34.866 CC module/fsdev/aio/fsdev_aio.o 00:14:34.866 CC module/accel/dsa/accel_dsa.o 00:14:34.866 CC module/keyring/file/keyring.o 00:14:34.866 LIB libspdk_env_dpdk_rpc.a 00:14:34.866 SO libspdk_env_dpdk_rpc.so.6.0 00:14:35.123 SYMLINK libspdk_env_dpdk_rpc.so 00:14:35.123 CC module/keyring/file/keyring_rpc.o 00:14:35.123 CC module/keyring/linux/keyring_rpc.o 00:14:35.123 LIB libspdk_scheduler_dynamic.a 00:14:35.123 SO libspdk_scheduler_dynamic.so.4.0 00:14:35.123 CC module/fsdev/aio/fsdev_aio_rpc.o 00:14:35.123 CC module/accel/ioat/accel_ioat_rpc.o 00:14:35.123 CC module/accel/error/accel_error_rpc.o 00:14:35.123 LIB libspdk_keyring_file.a 00:14:35.123 SYMLINK libspdk_scheduler_dynamic.so 00:14:35.123 LIB libspdk_blob_bdev.a 00:14:35.123 SO libspdk_keyring_file.so.2.0 00:14:35.123 SO libspdk_blob_bdev.so.12.0 00:14:35.123 CC module/accel/dsa/accel_dsa_rpc.o 00:14:35.123 LIB libspdk_keyring_linux.a 00:14:35.123 SYMLINK libspdk_keyring_file.so 00:14:35.123 CC module/fsdev/aio/linux_aio_mgr.o 00:14:35.123 SO libspdk_keyring_linux.so.1.0 00:14:35.123 SYMLINK libspdk_blob_bdev.so 00:14:35.123 LIB libspdk_accel_ioat.a 00:14:35.123 SYMLINK libspdk_keyring_linux.so 00:14:35.123 SO libspdk_accel_ioat.so.6.0 00:14:35.123 LIB libspdk_accel_error.a 00:14:35.123 LIB libspdk_accel_dsa.a 00:14:35.380 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:35.380 SO libspdk_accel_error.so.2.0 00:14:35.380 SO libspdk_accel_dsa.so.5.0 00:14:35.380 SYMLINK libspdk_accel_ioat.so 00:14:35.380 SYMLINK libspdk_accel_error.so 00:14:35.380 SYMLINK libspdk_accel_dsa.so 00:14:35.380 CC module/accel/iaa/accel_iaa.o 00:14:35.380 CC module/scheduler/gscheduler/gscheduler.o 00:14:35.380 LIB libspdk_scheduler_dpdk_governor.a 00:14:35.380 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:35.380 CC module/bdev/error/vbdev_error.o 00:14:35.380 CC module/bdev/delay/vbdev_delay.o 00:14:35.380 CC module/bdev/gpt/gpt.o 00:14:35.380 LIB libspdk_fsdev_aio.a 00:14:35.380 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:35.380 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:35.380 CC module/blobfs/bdev/blobfs_bdev.o 00:14:35.380 CC module/bdev/lvol/vbdev_lvol.o 00:14:35.637 LIB libspdk_scheduler_gscheduler.a 00:14:35.637 SO libspdk_fsdev_aio.so.1.0 00:14:35.637 SO libspdk_scheduler_gscheduler.so.4.0 00:14:35.637 CC module/accel/iaa/accel_iaa_rpc.o 00:14:35.637 LIB libspdk_sock_posix.a 00:14:35.637 SYMLINK libspdk_scheduler_gscheduler.so 00:14:35.637 SYMLINK libspdk_fsdev_aio.so 00:14:35.637 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:35.637 CC module/bdev/gpt/vbdev_gpt.o 00:14:35.637 SO libspdk_sock_posix.so.6.0 00:14:35.637 SYMLINK libspdk_sock_posix.so 00:14:35.637 LIB libspdk_accel_iaa.a 00:14:35.637 CC module/bdev/error/vbdev_error_rpc.o 00:14:35.637 SO libspdk_accel_iaa.so.3.0 00:14:35.637 LIB libspdk_blobfs_bdev.a 00:14:35.637 CC module/bdev/malloc/bdev_malloc.o 00:14:35.637 CC module/bdev/null/bdev_null.o 00:14:35.637 SO libspdk_blobfs_bdev.so.6.0 00:14:35.637 CC module/bdev/nvme/bdev_nvme.o 00:14:35.637 LIB 
libspdk_bdev_delay.a 00:14:35.895 SYMLINK libspdk_accel_iaa.so 00:14:35.895 SO libspdk_bdev_delay.so.6.0 00:14:35.895 CC module/bdev/passthru/vbdev_passthru.o 00:14:35.895 CC module/bdev/null/bdev_null_rpc.o 00:14:35.895 SYMLINK libspdk_blobfs_bdev.so 00:14:35.895 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:35.895 SYMLINK libspdk_bdev_delay.so 00:14:35.895 CC module/bdev/nvme/nvme_rpc.o 00:14:35.895 LIB libspdk_bdev_error.a 00:14:35.895 LIB libspdk_bdev_gpt.a 00:14:35.895 SO libspdk_bdev_error.so.6.0 00:14:35.895 SO libspdk_bdev_gpt.so.6.0 00:14:35.895 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:35.895 SYMLINK libspdk_bdev_error.so 00:14:35.895 SYMLINK libspdk_bdev_gpt.so 00:14:35.895 CC module/bdev/nvme/bdev_mdns_client.o 00:14:35.895 CC module/bdev/nvme/vbdev_opal.o 00:14:35.895 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:35.895 LIB libspdk_bdev_null.a 00:14:35.895 SO libspdk_bdev_null.so.6.0 00:14:36.153 SYMLINK libspdk_bdev_null.so 00:14:36.153 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:36.153 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:36.153 LIB libspdk_bdev_passthru.a 00:14:36.153 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:36.153 SO libspdk_bdev_passthru.so.6.0 00:14:36.153 SYMLINK libspdk_bdev_passthru.so 00:14:36.153 LIB libspdk_bdev_lvol.a 00:14:36.153 CC module/bdev/split/vbdev_split.o 00:14:36.153 CC module/bdev/raid/bdev_raid.o 00:14:36.153 LIB libspdk_bdev_malloc.a 00:14:36.153 SO libspdk_bdev_lvol.so.6.0 00:14:36.153 CC module/bdev/raid/bdev_raid_rpc.o 00:14:36.153 SO libspdk_bdev_malloc.so.6.0 00:14:36.411 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:36.411 SYMLINK libspdk_bdev_lvol.so 00:14:36.411 CC module/bdev/raid/bdev_raid_sb.o 00:14:36.411 SYMLINK libspdk_bdev_malloc.so 00:14:36.411 CC module/bdev/raid/raid0.o 00:14:36.411 CC module/bdev/aio/bdev_aio.o 00:14:36.411 CC module/bdev/xnvme/bdev_xnvme.o 00:14:36.411 CC module/bdev/split/vbdev_split_rpc.o 00:14:36.411 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:14:36.411 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:36.411 CC module/bdev/raid/raid1.o 00:14:36.670 CC module/bdev/raid/concat.o 00:14:36.670 LIB libspdk_bdev_split.a 00:14:36.670 SO libspdk_bdev_split.so.6.0 00:14:36.670 CC module/bdev/aio/bdev_aio_rpc.o 00:14:36.670 LIB libspdk_bdev_zone_block.a 00:14:36.670 LIB libspdk_bdev_xnvme.a 00:14:36.670 SYMLINK libspdk_bdev_split.so 00:14:36.670 SO libspdk_bdev_zone_block.so.6.0 00:14:36.670 SO libspdk_bdev_xnvme.so.3.0 00:14:36.670 SYMLINK libspdk_bdev_zone_block.so 00:14:36.670 SYMLINK libspdk_bdev_xnvme.so 00:14:36.670 LIB libspdk_bdev_aio.a 00:14:36.928 CC module/bdev/ftl/bdev_ftl.o 00:14:36.928 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:36.928 SO libspdk_bdev_aio.so.6.0 00:14:36.928 CC module/bdev/iscsi/bdev_iscsi.o 00:14:36.928 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:36.928 SYMLINK libspdk_bdev_aio.so 00:14:36.928 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:36.928 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:36.928 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:36.928 LIB libspdk_bdev_raid.a 00:14:37.187 SO libspdk_bdev_raid.so.6.0 00:14:37.187 LIB libspdk_bdev_ftl.a 00:14:37.187 SO libspdk_bdev_ftl.so.6.0 00:14:37.187 SYMLINK libspdk_bdev_raid.so 00:14:37.187 SYMLINK libspdk_bdev_ftl.so 00:14:37.187 LIB libspdk_bdev_iscsi.a 00:14:37.187 SO libspdk_bdev_iscsi.so.6.0 00:14:37.187 SYMLINK libspdk_bdev_iscsi.so 00:14:37.452 LIB libspdk_bdev_virtio.a 00:14:37.452 SO libspdk_bdev_virtio.so.6.0 00:14:37.452 SYMLINK libspdk_bdev_virtio.so 00:14:38.023 LIB libspdk_bdev_nvme.a 00:14:38.023 SO 
libspdk_bdev_nvme.so.7.1 00:14:38.023 SYMLINK libspdk_bdev_nvme.so 00:14:38.281 CC module/event/subsystems/iobuf/iobuf.o 00:14:38.281 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:38.281 CC module/event/subsystems/fsdev/fsdev.o 00:14:38.281 CC module/event/subsystems/vmd/vmd.o 00:14:38.281 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:38.281 CC module/event/subsystems/sock/sock.o 00:14:38.281 CC module/event/subsystems/keyring/keyring.o 00:14:38.281 CC module/event/subsystems/scheduler/scheduler.o 00:14:38.281 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:38.538 LIB libspdk_event_vmd.a 00:14:38.538 LIB libspdk_event_fsdev.a 00:14:38.538 LIB libspdk_event_sock.a 00:14:38.539 LIB libspdk_event_keyring.a 00:14:38.539 LIB libspdk_event_scheduler.a 00:14:38.539 LIB libspdk_event_iobuf.a 00:14:38.539 SO libspdk_event_vmd.so.6.0 00:14:38.539 LIB libspdk_event_vhost_blk.a 00:14:38.539 SO libspdk_event_fsdev.so.1.0 00:14:38.539 SO libspdk_event_keyring.so.1.0 00:14:38.539 SO libspdk_event_scheduler.so.4.0 00:14:38.539 SO libspdk_event_sock.so.5.0 00:14:38.539 SO libspdk_event_iobuf.so.3.0 00:14:38.539 SO libspdk_event_vhost_blk.so.3.0 00:14:38.539 SYMLINK libspdk_event_vmd.so 00:14:38.539 SYMLINK libspdk_event_keyring.so 00:14:38.539 SYMLINK libspdk_event_scheduler.so 00:14:38.539 SYMLINK libspdk_event_fsdev.so 00:14:38.539 SYMLINK libspdk_event_sock.so 00:14:38.539 SYMLINK libspdk_event_iobuf.so 00:14:38.539 SYMLINK libspdk_event_vhost_blk.so 00:14:38.796 CC module/event/subsystems/accel/accel.o 00:14:38.796 LIB libspdk_event_accel.a 00:14:39.054 SO libspdk_event_accel.so.6.0 00:14:39.054 SYMLINK libspdk_event_accel.so 00:14:39.312 CC module/event/subsystems/bdev/bdev.o 00:14:39.312 LIB libspdk_event_bdev.a 00:14:39.312 SO libspdk_event_bdev.so.6.0 00:14:39.312 SYMLINK libspdk_event_bdev.so 00:14:39.570 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:39.570 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:39.570 CC module/event/subsystems/ublk/ublk.o 00:14:39.570 CC module/event/subsystems/scsi/scsi.o 00:14:39.570 CC module/event/subsystems/nbd/nbd.o 00:14:39.827 LIB libspdk_event_nbd.a 00:14:39.827 LIB libspdk_event_ublk.a 00:14:39.827 LIB libspdk_event_scsi.a 00:14:39.827 SO libspdk_event_nbd.so.6.0 00:14:39.827 SO libspdk_event_ublk.so.3.0 00:14:39.827 LIB libspdk_event_nvmf.a 00:14:39.827 SO libspdk_event_scsi.so.6.0 00:14:39.827 SYMLINK libspdk_event_nbd.so 00:14:39.827 SO libspdk_event_nvmf.so.6.0 00:14:39.827 SYMLINK libspdk_event_ublk.so 00:14:39.827 SYMLINK libspdk_event_scsi.so 00:14:39.827 SYMLINK libspdk_event_nvmf.so 00:14:40.085 CC module/event/subsystems/iscsi/iscsi.o 00:14:40.085 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:40.085 LIB libspdk_event_iscsi.a 00:14:40.085 LIB libspdk_event_vhost_scsi.a 00:14:40.085 SO libspdk_event_vhost_scsi.so.3.0 00:14:40.085 SO libspdk_event_iscsi.so.6.0 00:14:40.085 SYMLINK libspdk_event_vhost_scsi.so 00:14:40.085 SYMLINK libspdk_event_iscsi.so 00:14:40.342 SO libspdk.so.6.0 00:14:40.342 SYMLINK libspdk.so 00:14:40.599 CXX app/trace/trace.o 00:14:40.599 CC app/spdk_lspci/spdk_lspci.o 00:14:40.599 CC app/trace_record/trace_record.o 00:14:40.599 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:40.599 CC app/nvmf_tgt/nvmf_main.o 00:14:40.599 CC app/spdk_tgt/spdk_tgt.o 00:14:40.599 CC app/iscsi_tgt/iscsi_tgt.o 00:14:40.599 CC examples/ioat/perf/perf.o 00:14:40.599 CC examples/util/zipf/zipf.o 00:14:40.599 CC test/thread/poller_perf/poller_perf.o 00:14:40.599 LINK spdk_lspci 00:14:40.599 LINK nvmf_tgt 00:14:40.599 LINK 
poller_perf 00:14:40.599 LINK interrupt_tgt 00:14:40.599 LINK iscsi_tgt 00:14:40.599 LINK spdk_tgt 00:14:40.599 LINK zipf 00:14:40.857 LINK spdk_trace_record 00:14:40.857 LINK ioat_perf 00:14:40.857 CC app/spdk_nvme_perf/perf.o 00:14:40.857 LINK spdk_trace 00:14:40.857 CC examples/ioat/verify/verify.o 00:14:40.857 CC app/spdk_nvme_identify/identify.o 00:14:41.114 CC test/dma/test_dma/test_dma.o 00:14:41.114 CC examples/sock/hello_world/hello_sock.o 00:14:41.114 CC examples/thread/thread/thread_ex.o 00:14:41.114 CC examples/vmd/lsvmd/lsvmd.o 00:14:41.114 CC test/app/bdev_svc/bdev_svc.o 00:14:41.114 CC app/spdk_nvme_discover/discovery_aer.o 00:14:41.114 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:41.114 LINK verify 00:14:41.114 LINK lsvmd 00:14:41.114 LINK bdev_svc 00:14:41.114 LINK hello_sock 00:14:41.114 LINK thread 00:14:41.114 LINK spdk_nvme_discover 00:14:41.462 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:41.462 CC examples/vmd/led/led.o 00:14:41.462 CC test/app/histogram_perf/histogram_perf.o 00:14:41.462 CC test/app/jsoncat/jsoncat.o 00:14:41.462 LINK led 00:14:41.462 CC test/app/stub/stub.o 00:14:41.462 LINK nvme_fuzz 00:14:41.462 LINK test_dma 00:14:41.462 CC examples/idxd/perf/perf.o 00:14:41.735 LINK histogram_perf 00:14:41.735 LINK jsoncat 00:14:41.735 LINK stub 00:14:41.735 LINK spdk_nvme_perf 00:14:41.735 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:41.735 TEST_HEADER include/spdk/accel.h 00:14:41.735 TEST_HEADER include/spdk/accel_module.h 00:14:41.735 TEST_HEADER include/spdk/assert.h 00:14:41.735 TEST_HEADER include/spdk/barrier.h 00:14:41.735 TEST_HEADER include/spdk/base64.h 00:14:41.735 TEST_HEADER include/spdk/bdev.h 00:14:41.735 TEST_HEADER include/spdk/bdev_module.h 00:14:41.735 TEST_HEADER include/spdk/bdev_zone.h 00:14:41.735 TEST_HEADER include/spdk/bit_array.h 00:14:41.735 CC examples/fsdev/hello_world/hello_fsdev.o 00:14:41.735 TEST_HEADER include/spdk/bit_pool.h 00:14:41.735 TEST_HEADER include/spdk/blob_bdev.h 00:14:41.735 CC app/spdk_top/spdk_top.o 00:14:41.735 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:41.735 TEST_HEADER include/spdk/blobfs.h 00:14:41.735 TEST_HEADER include/spdk/blob.h 00:14:41.735 TEST_HEADER include/spdk/conf.h 00:14:41.735 TEST_HEADER include/spdk/config.h 00:14:41.735 TEST_HEADER include/spdk/cpuset.h 00:14:41.735 TEST_HEADER include/spdk/crc16.h 00:14:41.735 TEST_HEADER include/spdk/crc32.h 00:14:41.735 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:41.735 TEST_HEADER include/spdk/crc64.h 00:14:41.735 TEST_HEADER include/spdk/dif.h 00:14:41.735 TEST_HEADER include/spdk/dma.h 00:14:41.735 TEST_HEADER include/spdk/endian.h 00:14:41.735 TEST_HEADER include/spdk/env_dpdk.h 00:14:41.735 TEST_HEADER include/spdk/env.h 00:14:41.735 TEST_HEADER include/spdk/event.h 00:14:41.736 TEST_HEADER include/spdk/fd_group.h 00:14:41.736 TEST_HEADER include/spdk/fd.h 00:14:41.736 TEST_HEADER include/spdk/file.h 00:14:41.736 TEST_HEADER include/spdk/fsdev.h 00:14:41.736 LINK spdk_nvme_identify 00:14:41.736 TEST_HEADER include/spdk/fsdev_module.h 00:14:41.736 TEST_HEADER include/spdk/ftl.h 00:14:41.736 TEST_HEADER include/spdk/fuse_dispatcher.h 00:14:41.736 TEST_HEADER include/spdk/gpt_spec.h 00:14:41.736 TEST_HEADER include/spdk/hexlify.h 00:14:41.736 TEST_HEADER include/spdk/histogram_data.h 00:14:41.736 TEST_HEADER include/spdk/idxd.h 00:14:41.736 TEST_HEADER include/spdk/idxd_spec.h 00:14:41.736 TEST_HEADER include/spdk/init.h 00:14:41.736 TEST_HEADER include/spdk/ioat.h 00:14:41.736 TEST_HEADER include/spdk/ioat_spec.h 00:14:41.736 
TEST_HEADER include/spdk/iscsi_spec.h 00:14:41.736 TEST_HEADER include/spdk/json.h 00:14:41.736 TEST_HEADER include/spdk/jsonrpc.h 00:14:41.736 TEST_HEADER include/spdk/keyring.h 00:14:41.736 TEST_HEADER include/spdk/keyring_module.h 00:14:41.736 TEST_HEADER include/spdk/likely.h 00:14:41.736 TEST_HEADER include/spdk/log.h 00:14:41.736 TEST_HEADER include/spdk/lvol.h 00:14:41.736 CC examples/accel/perf/accel_perf.o 00:14:41.736 TEST_HEADER include/spdk/md5.h 00:14:41.736 TEST_HEADER include/spdk/memory.h 00:14:41.736 TEST_HEADER include/spdk/mmio.h 00:14:41.736 TEST_HEADER include/spdk/nbd.h 00:14:41.736 TEST_HEADER include/spdk/net.h 00:14:41.736 TEST_HEADER include/spdk/notify.h 00:14:41.736 TEST_HEADER include/spdk/nvme.h 00:14:41.736 TEST_HEADER include/spdk/nvme_intel.h 00:14:41.736 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:41.736 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:41.736 TEST_HEADER include/spdk/nvme_spec.h 00:14:41.736 TEST_HEADER include/spdk/nvme_zns.h 00:14:41.736 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:41.736 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:41.736 TEST_HEADER include/spdk/nvmf.h 00:14:41.736 TEST_HEADER include/spdk/nvmf_spec.h 00:14:41.736 TEST_HEADER include/spdk/nvmf_transport.h 00:14:41.736 TEST_HEADER include/spdk/opal.h 00:14:41.736 TEST_HEADER include/spdk/opal_spec.h 00:14:41.736 TEST_HEADER include/spdk/pci_ids.h 00:14:41.736 TEST_HEADER include/spdk/pipe.h 00:14:41.736 TEST_HEADER include/spdk/queue.h 00:14:41.736 TEST_HEADER include/spdk/reduce.h 00:14:41.736 TEST_HEADER include/spdk/rpc.h 00:14:41.736 TEST_HEADER include/spdk/scheduler.h 00:14:41.736 TEST_HEADER include/spdk/scsi.h 00:14:41.736 TEST_HEADER include/spdk/scsi_spec.h 00:14:41.736 TEST_HEADER include/spdk/sock.h 00:14:41.736 TEST_HEADER include/spdk/stdinc.h 00:14:41.736 TEST_HEADER include/spdk/string.h 00:14:41.736 TEST_HEADER include/spdk/thread.h 00:14:41.736 LINK idxd_perf 00:14:41.736 TEST_HEADER include/spdk/trace.h 00:14:41.736 TEST_HEADER include/spdk/trace_parser.h 00:14:41.736 TEST_HEADER include/spdk/tree.h 00:14:41.994 TEST_HEADER include/spdk/ublk.h 00:14:41.994 TEST_HEADER include/spdk/util.h 00:14:41.994 TEST_HEADER include/spdk/uuid.h 00:14:41.994 TEST_HEADER include/spdk/version.h 00:14:41.994 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:41.994 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:41.994 TEST_HEADER include/spdk/vhost.h 00:14:41.994 TEST_HEADER include/spdk/vmd.h 00:14:41.994 TEST_HEADER include/spdk/xor.h 00:14:41.994 TEST_HEADER include/spdk/zipf.h 00:14:41.994 CXX test/cpp_headers/accel.o 00:14:41.994 CXX test/cpp_headers/accel_module.o 00:14:41.994 CC app/spdk_dd/spdk_dd.o 00:14:41.994 CC app/vhost/vhost.o 00:14:41.994 LINK hello_fsdev 00:14:41.994 CXX test/cpp_headers/assert.o 00:14:41.994 LINK vhost_fuzz 00:14:42.252 LINK vhost 00:14:42.252 CC examples/blob/hello_world/hello_blob.o 00:14:42.252 CC examples/nvme/hello_world/hello_world.o 00:14:42.252 CXX test/cpp_headers/barrier.o 00:14:42.252 CC examples/blob/cli/blobcli.o 00:14:42.252 LINK accel_perf 00:14:42.252 CC app/fio/nvme/fio_plugin.o 00:14:42.252 LINK spdk_dd 00:14:42.252 LINK hello_blob 00:14:42.509 CXX test/cpp_headers/base64.o 00:14:42.509 LINK hello_world 00:14:42.509 CC app/fio/bdev/fio_plugin.o 00:14:42.509 CXX test/cpp_headers/bdev.o 00:14:42.509 CC examples/nvme/reconnect/reconnect.o 00:14:42.509 CC test/env/vtophys/vtophys.o 00:14:42.767 CC test/env/mem_callbacks/mem_callbacks.o 00:14:42.767 LINK spdk_top 00:14:42.767 CXX test/cpp_headers/bdev_module.o 00:14:42.767 
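
For context: the TEST_HEADER lines enumerate every public header under include/spdk/, and the CXX test/cpp_headers/*.o compiles that follow build one translation unit per header, so any header that is not self-contained fails the build immediately. A minimal sketch of the idea, assuming illustrative file names and compiler flags rather than the exact Makefile rules:

    # compile each public header on its own to prove it is self-contained
    for h in include/spdk/*.h; do
        name=$(basename "$h" .h)
        echo "#include <spdk/${name}.h>" > "test/cpp_headers/${name}.cpp"
        # building as C++ also catches missing extern "C" guards
        g++ -Iinclude -c "test/cpp_headers/${name}.cpp" -o "test/cpp_headers/${name}.o"
    done

The LINK iscsi_fuzz / vhost_fuzz entries interleaved with these compiles below are the fuzz-test binaries produced by the same make invocation.
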
LINK iscsi_fuzz 00:14:42.767 CC examples/bdev/hello_world/hello_bdev.o 00:14:42.767 LINK vtophys 00:14:42.767 LINK blobcli 00:14:42.767 CXX test/cpp_headers/bdev_zone.o 00:14:42.767 CXX test/cpp_headers/bit_array.o 00:14:42.767 CC examples/bdev/bdevperf/bdevperf.o 00:14:43.025 LINK spdk_nvme 00:14:43.025 LINK spdk_bdev 00:14:43.025 LINK reconnect 00:14:43.025 LINK hello_bdev 00:14:43.025 CC test/event/event_perf/event_perf.o 00:14:43.025 CXX test/cpp_headers/bit_pool.o 00:14:43.025 CC test/event/reactor/reactor.o 00:14:43.025 CC test/event/reactor_perf/reactor_perf.o 00:14:43.025 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:43.025 LINK event_perf 00:14:43.025 CC examples/nvme/arbitration/arbitration.o 00:14:43.025 LINK reactor 00:14:43.025 CXX test/cpp_headers/blob_bdev.o 00:14:43.282 LINK mem_callbacks 00:14:43.282 CC test/event/app_repeat/app_repeat.o 00:14:43.282 CC test/event/scheduler/scheduler.o 00:14:43.282 CXX test/cpp_headers/blobfs_bdev.o 00:14:43.282 CXX test/cpp_headers/blobfs.o 00:14:43.282 LINK reactor_perf 00:14:43.282 LINK app_repeat 00:14:43.282 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:43.282 LINK scheduler 00:14:43.282 CXX test/cpp_headers/blob.o 00:14:43.546 CXX test/cpp_headers/conf.o 00:14:43.546 LINK arbitration 00:14:43.546 CC test/nvme/aer/aer.o 00:14:43.546 CC test/nvme/reset/reset.o 00:14:43.546 LINK env_dpdk_post_init 00:14:43.546 CC examples/nvme/hotplug/hotplug.o 00:14:43.546 CXX test/cpp_headers/config.o 00:14:43.546 LINK nvme_manage 00:14:43.546 CXX test/cpp_headers/cpuset.o 00:14:43.546 CC test/rpc_client/rpc_client_test.o 00:14:43.546 CC test/nvme/sgl/sgl.o 00:14:43.546 CC test/nvme/e2edp/nvme_dp.o 00:14:43.805 CC test/env/memory/memory_ut.o 00:14:43.805 LINK reset 00:14:43.805 CXX test/cpp_headers/crc16.o 00:14:43.805 LINK bdevperf 00:14:43.805 LINK hotplug 00:14:43.805 LINK aer 00:14:43.805 LINK rpc_client_test 00:14:43.805 CC test/env/pci/pci_ut.o 00:14:43.805 CXX test/cpp_headers/crc32.o 00:14:43.805 LINK nvme_dp 00:14:43.805 LINK sgl 00:14:43.805 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:44.062 CC examples/nvme/abort/abort.o 00:14:44.062 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:44.062 CXX test/cpp_headers/crc64.o 00:14:44.062 CC test/accel/dif/dif.o 00:14:44.062 CC test/nvme/overhead/overhead.o 00:14:44.062 LINK cmb_copy 00:14:44.062 CC test/blobfs/mkfs/mkfs.o 00:14:44.062 LINK pmr_persistence 00:14:44.062 CXX test/cpp_headers/dif.o 00:14:44.319 LINK pci_ut 00:14:44.319 CC test/lvol/esnap/esnap.o 00:14:44.319 CXX test/cpp_headers/dma.o 00:14:44.319 CXX test/cpp_headers/endian.o 00:14:44.319 LINK mkfs 00:14:44.319 CC test/nvme/err_injection/err_injection.o 00:14:44.319 LINK abort 00:14:44.319 LINK overhead 00:14:44.319 CXX test/cpp_headers/env_dpdk.o 00:14:44.319 CXX test/cpp_headers/env.o 00:14:44.319 CXX test/cpp_headers/event.o 00:14:44.627 LINK err_injection 00:14:44.627 CC test/nvme/startup/startup.o 00:14:44.627 CXX test/cpp_headers/fd_group.o 00:14:44.627 CXX test/cpp_headers/fd.o 00:14:44.627 CC test/nvme/reserve/reserve.o 00:14:44.627 CXX test/cpp_headers/file.o 00:14:44.627 CC examples/nvmf/nvmf/nvmf.o 00:14:44.627 CC test/nvme/simple_copy/simple_copy.o 00:14:44.627 LINK startup 00:14:44.627 CXX test/cpp_headers/fsdev.o 00:14:44.627 CC test/nvme/connect_stress/connect_stress.o 00:14:44.889 LINK dif 00:14:44.889 CXX test/cpp_headers/fsdev_module.o 00:14:44.889 LINK reserve 00:14:44.889 LINK memory_ut 00:14:44.889 LINK connect_stress 00:14:44.889 CC test/nvme/boot_partition/boot_partition.o 00:14:44.889 LINK 
simple_copy 00:14:44.889 CXX test/cpp_headers/ftl.o 00:14:44.889 CC test/nvme/compliance/nvme_compliance.o 00:14:44.889 LINK nvmf 00:14:44.889 CC test/nvme/fused_ordering/fused_ordering.o 00:14:44.889 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:45.169 CC test/nvme/fdp/fdp.o 00:14:45.169 CC test/nvme/cuse/cuse.o 00:14:45.169 LINK boot_partition 00:14:45.169 CXX test/cpp_headers/fuse_dispatcher.o 00:14:45.169 CXX test/cpp_headers/gpt_spec.o 00:14:45.169 LINK doorbell_aers 00:14:45.169 LINK fused_ordering 00:14:45.169 CXX test/cpp_headers/hexlify.o 00:14:45.169 CC test/bdev/bdevio/bdevio.o 00:14:45.169 CXX test/cpp_headers/histogram_data.o 00:14:45.169 CXX test/cpp_headers/idxd.o 00:14:45.169 LINK nvme_compliance 00:14:45.169 CXX test/cpp_headers/idxd_spec.o 00:14:45.427 CXX test/cpp_headers/init.o 00:14:45.427 CXX test/cpp_headers/ioat.o 00:14:45.427 LINK fdp 00:14:45.427 CXX test/cpp_headers/ioat_spec.o 00:14:45.427 CXX test/cpp_headers/iscsi_spec.o 00:14:45.427 CXX test/cpp_headers/json.o 00:14:45.427 CXX test/cpp_headers/jsonrpc.o 00:14:45.427 CXX test/cpp_headers/keyring.o 00:14:45.427 CXX test/cpp_headers/keyring_module.o 00:14:45.427 CXX test/cpp_headers/likely.o 00:14:45.427 CXX test/cpp_headers/log.o 00:14:45.427 CXX test/cpp_headers/lvol.o 00:14:45.427 CXX test/cpp_headers/md5.o 00:14:45.685 LINK bdevio 00:14:45.685 CXX test/cpp_headers/memory.o 00:14:45.685 CXX test/cpp_headers/mmio.o 00:14:45.685 CXX test/cpp_headers/nbd.o 00:14:45.685 CXX test/cpp_headers/net.o 00:14:45.685 CXX test/cpp_headers/notify.o 00:14:45.685 CXX test/cpp_headers/nvme.o 00:14:45.685 CXX test/cpp_headers/nvme_intel.o 00:14:45.685 CXX test/cpp_headers/nvme_ocssd.o 00:14:45.685 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:45.685 CXX test/cpp_headers/nvme_spec.o 00:14:45.685 CXX test/cpp_headers/nvme_zns.o 00:14:45.685 CXX test/cpp_headers/nvmf_cmd.o 00:14:45.685 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:45.685 CXX test/cpp_headers/nvmf.o 00:14:45.685 CXX test/cpp_headers/nvmf_spec.o 00:14:45.943 CXX test/cpp_headers/nvmf_transport.o 00:14:45.943 CXX test/cpp_headers/opal.o 00:14:45.943 CXX test/cpp_headers/opal_spec.o 00:14:45.943 CXX test/cpp_headers/pci_ids.o 00:14:45.943 CXX test/cpp_headers/pipe.o 00:14:45.943 CXX test/cpp_headers/queue.o 00:14:45.943 CXX test/cpp_headers/reduce.o 00:14:45.943 CXX test/cpp_headers/rpc.o 00:14:45.943 CXX test/cpp_headers/scheduler.o 00:14:45.943 CXX test/cpp_headers/scsi.o 00:14:45.943 CXX test/cpp_headers/scsi_spec.o 00:14:45.943 CXX test/cpp_headers/sock.o 00:14:45.943 CXX test/cpp_headers/stdinc.o 00:14:45.943 CXX test/cpp_headers/string.o 00:14:45.943 CXX test/cpp_headers/thread.o 00:14:46.200 CXX test/cpp_headers/trace.o 00:14:46.200 CXX test/cpp_headers/trace_parser.o 00:14:46.200 CXX test/cpp_headers/tree.o 00:14:46.200 CXX test/cpp_headers/ublk.o 00:14:46.200 CXX test/cpp_headers/util.o 00:14:46.200 CXX test/cpp_headers/uuid.o 00:14:46.200 CXX test/cpp_headers/version.o 00:14:46.200 CXX test/cpp_headers/vfio_user_pci.o 00:14:46.200 CXX test/cpp_headers/vfio_user_spec.o 00:14:46.200 CXX test/cpp_headers/vhost.o 00:14:46.200 CXX test/cpp_headers/vmd.o 00:14:46.200 CXX test/cpp_headers/xor.o 00:14:46.200 LINK cuse 00:14:46.200 CXX test/cpp_headers/zipf.o 00:14:49.480 LINK esnap 00:14:49.480 00:14:49.480 real 1m7.279s 00:14:49.480 user 6m21.466s 00:14:49.480 sys 1m5.363s 00:14:49.480 04:36:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:14:49.480 04:36:56 make -- common/autotest_common.sh@10 -- $ set +x 00:14:49.480 
************************************ 00:14:49.480 END TEST make 00:14:49.480 ************************************ 00:14:49.480 04:36:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:49.480 04:36:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:49.480 04:36:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:49.480 04:36:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:49.480 04:36:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:49.480 04:36:56 -- pm/common@44 -- $ pid=5073 00:14:49.480 04:36:56 -- pm/common@50 -- $ kill -TERM 5073 00:14:49.480 04:36:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:49.480 04:36:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:49.480 04:36:56 -- pm/common@44 -- $ pid=5074 00:14:49.480 04:36:56 -- pm/common@50 -- $ kill -TERM 5074 00:14:49.480 04:36:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:14:49.480 04:36:56 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:14:49.480 04:36:56 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:49.480 04:36:56 -- common/autotest_common.sh@1693 -- # lcov --version 00:14:49.480 04:36:56 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:49.738 04:36:56 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:49.738 04:36:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.738 04:36:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.738 04:36:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.738 04:36:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.738 04:36:56 -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.738 04:36:56 -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.738 04:36:56 -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.738 04:36:56 -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.738 04:36:56 -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.738 04:36:56 -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.738 04:36:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.738 04:36:56 -- scripts/common.sh@344 -- # case "$op" in 00:14:49.738 04:36:56 -- scripts/common.sh@345 -- # : 1 00:14:49.738 04:36:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.738 04:36:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.738 04:36:56 -- scripts/common.sh@365 -- # decimal 1 00:14:49.738 04:36:56 -- scripts/common.sh@353 -- # local d=1 00:14:49.738 04:36:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.738 04:36:56 -- scripts/common.sh@355 -- # echo 1 00:14:49.739 04:36:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.739 04:36:56 -- scripts/common.sh@366 -- # decimal 2 00:14:49.739 04:36:56 -- scripts/common.sh@353 -- # local d=2 00:14:49.739 04:36:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.739 04:36:56 -- scripts/common.sh@355 -- # echo 2 00:14:49.739 04:36:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.739 04:36:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.739 04:36:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.739 04:36:56 -- scripts/common.sh@368 -- # return 0 00:14:49.739 04:36:56 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.739 04:36:56 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:49.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.739 --rc genhtml_branch_coverage=1 00:14:49.739 --rc genhtml_function_coverage=1 00:14:49.739 --rc genhtml_legend=1 00:14:49.739 --rc geninfo_all_blocks=1 00:14:49.739 --rc geninfo_unexecuted_blocks=1 00:14:49.739 00:14:49.739 ' 00:14:49.739 04:36:56 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:49.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.739 --rc genhtml_branch_coverage=1 00:14:49.739 --rc genhtml_function_coverage=1 00:14:49.739 --rc genhtml_legend=1 00:14:49.739 --rc geninfo_all_blocks=1 00:14:49.739 --rc geninfo_unexecuted_blocks=1 00:14:49.739 00:14:49.739 ' 00:14:49.739 04:36:56 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:49.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.739 --rc genhtml_branch_coverage=1 00:14:49.739 --rc genhtml_function_coverage=1 00:14:49.739 --rc genhtml_legend=1 00:14:49.739 --rc geninfo_all_blocks=1 00:14:49.739 --rc geninfo_unexecuted_blocks=1 00:14:49.739 00:14:49.739 ' 00:14:49.739 04:36:56 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:49.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.739 --rc genhtml_branch_coverage=1 00:14:49.739 --rc genhtml_function_coverage=1 00:14:49.739 --rc genhtml_legend=1 00:14:49.739 --rc geninfo_all_blocks=1 00:14:49.739 --rc geninfo_unexecuted_blocks=1 00:14:49.739 00:14:49.739 ' 00:14:49.739 04:36:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.739 04:36:56 -- nvmf/common.sh@7 -- # uname -s 00:14:49.739 04:36:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.739 04:36:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.739 04:36:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.739 04:36:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.739 04:36:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.739 04:36:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.739 04:36:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.739 04:36:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.739 04:36:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.739 04:36:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.739 04:36:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:14:49.739 
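
The scripts/common.sh trace just above ('lt 1.15 2', IFS=.-:, the (( ver1[v] < ver2[v] )) loop) is a field-by-field dotted-version comparison: autotest probes the installed lcov and, because 1.15 sorts below 2, selects the legacy --rc lcov_* option names. A condensed sketch of the same algorithm under a simplified helper name (the real cmp_versions also supports the >, >= and == operators):

    version_lt() {
        # true when $1 sorts strictly below $2; fields split on '.', '-' and ':'
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "legacy lcov: pass --rc lcov_branch_coverage=1"

The nvmf/common.sh environment block resumes below with the generated host NQN/ID.
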
04:36:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:14:49.739 04:36:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.739 04:36:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.739 04:36:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:49.739 04:36:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.739 04:36:56 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.739 04:36:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.739 04:36:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.739 04:36:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.739 04:36:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.739 04:36:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.739 04:36:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.739 04:36:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.739 04:36:56 -- paths/export.sh@5 -- # export PATH 00:14:49.739 04:36:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.739 04:36:56 -- nvmf/common.sh@51 -- # : 0 00:14:49.739 04:36:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.739 04:36:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.739 04:36:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.739 04:36:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.739 04:36:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.739 04:36:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.739 04:36:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.739 04:36:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.739 04:36:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.739 04:36:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:49.739 04:36:56 -- spdk/autotest.sh@32 -- # uname -s 00:14:49.739 04:36:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:49.739 04:36:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:49.739 04:36:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:49.739 04:36:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:49.739 04:36:56 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:49.739 04:36:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:49.739 04:36:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:49.739 04:36:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:49.739 04:36:56 -- spdk/autotest.sh@48 -- # udevadm_pid=54233 00:14:49.739 04:36:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:49.739 04:36:56 -- pm/common@17 -- # local monitor 00:14:49.739 04:36:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:49.739 04:36:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:49.739 04:36:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:49.739 04:36:56 -- pm/common@25 -- # sleep 1 00:14:49.739 04:36:56 -- pm/common@21 -- # date +%s 00:14:49.739 04:36:56 -- pm/common@21 -- # date +%s 00:14:49.739 04:36:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732682216 00:14:49.739 04:36:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732682216 00:14:49.739 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732682216_collect-cpu-load.pm.log 00:14:49.739 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732682216_collect-vmstat.pm.log 00:14:50.673 04:36:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:50.673 04:36:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:50.673 04:36:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.673 04:36:57 -- common/autotest_common.sh@10 -- # set +x 00:14:50.673 04:36:57 -- spdk/autotest.sh@59 -- # create_test_list 00:14:50.673 04:36:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:14:50.673 04:36:57 -- common/autotest_common.sh@10 -- # set +x 00:14:50.673 04:36:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:50.673 04:36:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:50.673 04:36:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:50.673 04:36:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:50.673 04:36:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:50.673 04:36:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:50.673 04:36:57 -- common/autotest_common.sh@1457 -- # uname 00:14:50.673 04:36:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:14:50.673 04:36:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:50.937 04:36:57 -- common/autotest_common.sh@1477 -- # uname 00:14:50.937 04:36:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:14:50.937 04:36:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:14:50.937 04:36:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:14:50.937 lcov: LCOV version 1.15 00:14:50.937 04:36:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:15:05.951 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:15:05.951 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:15:20.855 04:37:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:15:20.855 04:37:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.855 04:37:26 -- common/autotest_common.sh@10 -- # set +x 00:15:20.855 04:37:26 -- spdk/autotest.sh@78 -- # rm -f 00:15:20.855 04:37:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:20.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.855 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:20.855 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:20.855 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:20.855 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:20.855 04:37:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:15:20.855 04:37:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:20.855 04:37:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:20.855 04:37:27 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:20.855 
04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.855 04:37:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:20.855 04:37:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:20.855 04:37:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:20.855 No valid GPT data, bailing 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # pt= 00:15:20.855 04:37:27 -- scripts/common.sh@395 -- # return 1 00:15:20.855 04:37:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:20.855 1+0 records in 00:15:20.855 1+0 records out 00:15:20.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0420197 s, 25.0 MB/s 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:15:20.855 No valid GPT data, bailing 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # pt= 00:15:20.855 04:37:27 -- scripts/common.sh@395 -- # return 1 00:15:20.855 04:37:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:15:20.855 1+0 records in 00:15:20.855 1+0 records out 00:15:20.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533162 s, 197 MB/s 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:15:20.855 No valid GPT data, bailing 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # pt= 00:15:20.855 04:37:27 -- scripts/common.sh@395 -- # return 1 00:15:20.855 04:37:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:15:20.855 1+0 
records in 00:15:20.855 1+0 records out 00:15:20.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483154 s, 217 MB/s 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:15:20.855 No valid GPT data, bailing 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # pt= 00:15:20.855 04:37:27 -- scripts/common.sh@395 -- # return 1 00:15:20.855 04:37:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:15:20.855 1+0 records in 00:15:20.855 1+0 records out 00:15:20.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557235 s, 188 MB/s 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:15:20.855 No valid GPT data, bailing 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:15:20.855 04:37:27 -- scripts/common.sh@394 -- # pt= 00:15:20.855 04:37:27 -- scripts/common.sh@395 -- # return 1 00:15:20.855 04:37:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:15:20.855 1+0 records in 00:15:20.855 1+0 records out 00:15:20.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566031 s, 185 MB/s 00:15:20.855 04:37:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:20.855 04:37:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:20.855 04:37:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:15:20.855 04:37:27 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:15:20.855 04:37:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:15:20.855 No valid GPT data, bailing 00:15:20.856 04:37:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:15:21.112 04:37:28 -- scripts/common.sh@394 -- # pt= 00:15:21.112 04:37:28 -- scripts/common.sh@395 -- # return 1 00:15:21.112 04:37:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:15:21.112 1+0 records in 00:15:21.112 1+0 records out 00:15:21.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523829 s, 200 MB/s 00:15:21.112 04:37:28 -- spdk/autotest.sh@105 -- # sync 00:15:21.112 04:37:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:21.112 04:37:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:21.112 04:37:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:15:23.032 04:37:29 -- spdk/autotest.sh@111 -- # uname -s 00:15:23.032 04:37:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:15:23.032 04:37:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:15:23.032 04:37:29 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:23.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:23.557 
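
Summarizing the pre-cleanup pass above: setup.sh reset handed the NVMe namespaces back to the kernel driver, get_zoned_devs skipped anything whose /sys/block/*/queue/zoned is not 'none', and each remaining namespace was probed for a partition table (spdk-gpt.py, then blkid) before its first MiB was zeroed. A boiled-down sketch of that loop, assuming simplified error handling:

    shopt -s extglob                       # enables the !(*p*) namespace-only glob
    for dev in /dev/nvme*n!(*p*); do
        name=${dev#/dev/}
        # skip zoned namespaces: random overwrites are not allowed there
        [[ $(cat "/sys/block/$name/queue/zoned") != none ]] && continue
        if ! scripts/spdk-gpt.py "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # scrub stale metadata
        fi
    done

(The slower first dd, 25.0 MB/s against roughly 200 MB/s for the rest, is presumably first-touch warm-up rather than a device difference.) The setup.sh status device listing continues below.
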
Hugepages 00:15:23.557 node hugesize free / total 00:15:23.557 node0 1048576kB 0 / 0 00:15:23.818 node0 2048kB 0 / 0 00:15:23.818 00:15:23.818 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:23.818 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:23.818 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:15:23.818 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:15:24.078 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:15:24.078 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:15:24.078 04:37:31 -- spdk/autotest.sh@117 -- # uname -s 00:15:24.078 04:37:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:15:24.078 04:37:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:15:24.078 04:37:31 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:24.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:25.221 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:25.221 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:25.221 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:25.221 04:37:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:15:26.165 04:37:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:15:26.165 04:37:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:15:26.165 04:37:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:15:26.165 04:37:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:15:26.165 04:37:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:26.165 04:37:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:15:26.165 04:37:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:26.165 04:37:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:26.165 04:37:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:26.427 04:37:33 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:26.427 04:37:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:26.427 04:37:33 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:26.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:26.743 Waiting for block devices as requested 00:15:26.743 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:27.016 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:27.016 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:27.016 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:32.311 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:32.311 04:37:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:32.311 04:37:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:15:32.311 04:37:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:15:32.311 04:37:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:32.311 04:37:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:32.311 04:37:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:32.311 04:37:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:32.311 04:37:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:32.311 04:37:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:32.311 04:37:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:15:32.311 04:37:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:32.311 04:37:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:32.311 04:37:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:32.311 04:37:39 -- common/autotest_common.sh@1543 -- # continue 00:15:32.311 04:37:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:32.311 04:37:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:32.311 04:37:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:32.311 04:37:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:32.312 04:37:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1543 -- # continue 00:15:32.312 04:37:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:32.312 04:37:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:32.312 04:37:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1543 -- # continue 00:15:32.312 04:37:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:32.312 04:37:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:15:32.312 04:37:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:32.312 04:37:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:32.312 04:37:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:32.312 04:37:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:15:32.312 04:37:39 -- common/autotest_common.sh@1543 -- # continue 00:15:32.312 04:37:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:15:32.312 04:37:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.312 04:37:39 -- common/autotest_common.sh@10 -- # set +x 00:15:32.312 04:37:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:15:32.312 04:37:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.312 04:37:39 -- common/autotest_common.sh@10 -- # set +x 00:15:32.312 04:37:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:33.500 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.500 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.500 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.500 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.500 04:37:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:15:33.500 04:37:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.500 04:37:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.500 04:37:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:15:33.500 04:37:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:15:33.500 04:37:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:15:33.500 04:37:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:15:33.500 04:37:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:15:33.500 04:37:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:15:33.500 04:37:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:15:33.500 04:37:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:15:33.500 04:37:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:33.500 04:37:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:15:33.500 04:37:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:33.500 04:37:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.500 04:37:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:33.500 04:37:40 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:33.500 04:37:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:33.500 04:37:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:33.500 04:37:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:33.500 04:37:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:33.500 04:37:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:33.500 04:37:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:33.500 04:37:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
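
Two sysfs-driven scans run in the block above and below. First, for every controller BDF, autotest resolves the /dev/nvmeX node through the /sys/class/nvme symlinks and parses nvme id-ctrl output, treating the drive as clean when the OACS namespace-management bit (0x08) is set and unvmcap is 0. Second, opal_revert_cleanup starts matching each controller's PCI device ID against 0x0a54 (the Intel DC P4510-family ID that needs an opal revert); that scan finishes for 0000:00:13.0 just below and matches nothing on these QEMU 0x0010 devices. A compact sketch of the first check, with the grep/cut parsing taken from the trace (nvme here is nvme-cli):

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # sysfs maps the PCI address to its character device node
        ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)       # e.g. ' 0x12a'
        (( oacs & 0x8 )) || continue       # no namespace management -> nothing to verify
        unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue     # all capacity allocated -> controller is clean
        echo "controller $ctrlr has unallocated capacity; needs namespace cleanup" >&2
    done
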
00:15:33.500 04:37:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:15:33.500 04:37:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:33.500 04:37:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:33.500 04:37:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:15:33.500 04:37:40 -- common/autotest_common.sh@1572 -- # return 0 00:15:33.500 04:37:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:15:33.500 04:37:40 -- common/autotest_common.sh@1580 -- # return 0 00:15:33.500 04:37:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:15:33.500 04:37:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:15:33.500 04:37:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:33.500 04:37:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:33.500 04:37:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:15:33.500 04:37:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.500 04:37:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.500 04:37:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:15:33.500 04:37:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:33.500 04:37:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.500 04:37:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.500 04:37:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.761 ************************************ 00:15:33.761 START TEST env 00:15:33.761 ************************************ 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:33.761 * Looking for test storage... 00:15:33.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:33.761 04:37:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.761 04:37:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.761 04:37:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.761 04:37:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.761 04:37:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.761 04:37:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.761 04:37:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.761 04:37:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.761 04:37:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.761 04:37:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.761 04:37:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.761 04:37:40 env -- scripts/common.sh@344 -- # case "$op" in 00:15:33.761 04:37:40 env -- scripts/common.sh@345 -- # : 1 00:15:33.761 04:37:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.761 04:37:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.761 04:37:40 env -- scripts/common.sh@365 -- # decimal 1 00:15:33.761 04:37:40 env -- scripts/common.sh@353 -- # local d=1 00:15:33.761 04:37:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.761 04:37:40 env -- scripts/common.sh@355 -- # echo 1 00:15:33.761 04:37:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.761 04:37:40 env -- scripts/common.sh@366 -- # decimal 2 00:15:33.761 04:37:40 env -- scripts/common.sh@353 -- # local d=2 00:15:33.761 04:37:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.761 04:37:40 env -- scripts/common.sh@355 -- # echo 2 00:15:33.761 04:37:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.761 04:37:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.761 04:37:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.761 04:37:40 env -- scripts/common.sh@368 -- # return 0 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.761 --rc genhtml_branch_coverage=1 00:15:33.761 --rc genhtml_function_coverage=1 00:15:33.761 --rc genhtml_legend=1 00:15:33.761 --rc geninfo_all_blocks=1 00:15:33.761 --rc geninfo_unexecuted_blocks=1 00:15:33.761 00:15:33.761 ' 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.761 --rc genhtml_branch_coverage=1 00:15:33.761 --rc genhtml_function_coverage=1 00:15:33.761 --rc genhtml_legend=1 00:15:33.761 --rc geninfo_all_blocks=1 00:15:33.761 --rc geninfo_unexecuted_blocks=1 00:15:33.761 00:15:33.761 ' 00:15:33.761 04:37:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.761 --rc genhtml_branch_coverage=1 00:15:33.761 --rc genhtml_function_coverage=1 00:15:33.761 --rc genhtml_legend=1 00:15:33.761 --rc geninfo_all_blocks=1 00:15:33.761 --rc geninfo_unexecuted_blocks=1 00:15:33.762 00:15:33.762 ' 00:15:33.762 04:37:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:33.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.762 --rc genhtml_branch_coverage=1 00:15:33.762 --rc genhtml_function_coverage=1 00:15:33.762 --rc genhtml_legend=1 00:15:33.762 --rc geninfo_all_blocks=1 00:15:33.762 --rc geninfo_unexecuted_blocks=1 00:15:33.762 00:15:33.762 ' 00:15:33.762 04:37:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:33.762 04:37:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.762 04:37:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.762 04:37:40 env -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 ************************************ 00:15:33.762 START TEST env_memory 00:15:33.762 ************************************ 00:15:33.762 04:37:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:33.762 00:15:33.762 00:15:33.762 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.762 http://cunit.sourceforge.net/ 00:15:33.762 00:15:33.762 00:15:33.762 Suite: memory 00:15:33.762 Test: alloc and free memory map ...[2024-11-27 04:37:40.919020] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:33.762 passed 00:15:33.762 Test: mem map translation ...[2024-11-27 04:37:40.958217] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:33.762 [2024-11-27 04:37:40.958345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:33.762 [2024-11-27 04:37:40.958454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:33.762 [2024-11-27 04:37:40.958491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:34.022 passed 00:15:34.022 Test: mem map registration ...[2024-11-27 04:37:41.027136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:15:34.022 [2024-11-27 04:37:41.027263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:15:34.022 passed 00:15:34.022 Test: mem map adjacent registrations ...passed 00:15:34.022 00:15:34.022 Run Summary: Type Total Ran Passed Failed Inactive 00:15:34.022 suites 1 1 n/a 0 0 00:15:34.022 tests 4 4 4 0 0 00:15:34.022 asserts 152 152 152 0 n/a 00:15:34.022 00:15:34.022 Elapsed time = 0.234 seconds 00:15:34.022 00:15:34.022 real 0m0.271s 00:15:34.022 user 0m0.240s 00:15:34.022 sys 0m0.022s 00:15:34.022 04:37:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.023 04:37:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:15:34.023 ************************************ 00:15:34.023 END TEST env_memory 00:15:34.023 ************************************ 00:15:34.023 04:37:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:34.023 04:37:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.023 04:37:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.023 04:37:41 env -- common/autotest_common.sh@10 -- # set +x 00:15:34.023 ************************************ 00:15:34.023 START TEST env_vtophys 00:15:34.023 ************************************ 00:15:34.023 04:37:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:34.284 EAL: lib.eal log level changed from notice to debug 00:15:34.284 EAL: Detected lcore 0 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 1 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 2 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 3 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 4 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 5 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 6 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 7 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 8 as core 0 on socket 0 00:15:34.284 EAL: Detected lcore 9 as core 0 on socket 0 00:15:34.284 EAL: Maximum logical cores by configuration: 128 00:15:34.284 EAL: Detected CPU lcores: 10 00:15:34.284 EAL: Detected NUMA nodes: 1 00:15:34.284 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:15:34.284 EAL: Detected shared linkage of DPDK 00:15:34.284 EAL: No 
shared files mode enabled, IPC will be disabled 00:15:34.284 EAL: Selected IOVA mode 'PA' 00:15:34.284 EAL: Probing VFIO support... 00:15:34.284 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:34.284 EAL: VFIO modules not loaded, skipping VFIO support... 00:15:34.284 EAL: Ask a virtual area of 0x2e000 bytes 00:15:34.284 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:34.284 EAL: Setting up physically contiguous memory... 00:15:34.284 EAL: Setting maximum number of open files to 524288 00:15:34.284 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:34.284 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:34.284 EAL: Ask a virtual area of 0x61000 bytes 00:15:34.284 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:34.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:34.284 EAL: Ask a virtual area of 0x400000000 bytes 00:15:34.284 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:34.284 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:34.284 EAL: Ask a virtual area of 0x61000 bytes 00:15:34.284 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:34.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:34.284 EAL: Ask a virtual area of 0x400000000 bytes 00:15:34.284 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:34.284 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:34.284 EAL: Ask a virtual area of 0x61000 bytes 00:15:34.284 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:34.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:34.284 EAL: Ask a virtual area of 0x400000000 bytes 00:15:34.284 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:34.284 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:34.284 EAL: Ask a virtual area of 0x61000 bytes 00:15:34.284 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:34.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:34.284 EAL: Ask a virtual area of 0x400000000 bytes 00:15:34.284 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:34.284 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:34.284 EAL: Hugepages will be freed exactly as allocated. 00:15:34.284 EAL: No shared files mode enabled, IPC is disabled 00:15:34.284 EAL: No shared files mode enabled, IPC is disabled 00:15:34.284 EAL: TSC frequency is ~2600000 KHz 00:15:34.284 EAL: Main lcore 0 is ready (tid=7f7a10a1ba40;cpuset=[0]) 00:15:34.284 EAL: Trying to obtain current memory policy. 00:15:34.284 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.284 EAL: Restoring previous memory policy: 0 00:15:34.284 EAL: request: mp_malloc_sync 00:15:34.284 EAL: No shared files mode enabled, IPC is disabled 00:15:34.284 EAL: Heap on socket 0 was expanded by 2MB 00:15:34.284 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:34.284 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:34.284 EAL: Mem event callback 'spdk:(nil)' registered 00:15:34.284 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:15:34.284 00:15:34.284 00:15:34.284 CUnit - A unit testing framework for C - Version 2.1-3 00:15:34.284 http://cunit.sourceforge.net/ 00:15:34.284 00:15:34.284 00:15:34.284 Suite: components_suite 00:15:34.546 Test: vtophys_malloc_test ...passed 00:15:34.546 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:15:34.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.546 EAL: Restoring previous memory policy: 4 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was expanded by 4MB 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was shrunk by 4MB 00:15:34.546 EAL: Trying to obtain current memory policy. 00:15:34.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.546 EAL: Restoring previous memory policy: 4 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was expanded by 6MB 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was shrunk by 6MB 00:15:34.546 EAL: Trying to obtain current memory policy. 00:15:34.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.546 EAL: Restoring previous memory policy: 4 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was expanded by 10MB 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was shrunk by 10MB 00:15:34.546 EAL: Trying to obtain current memory policy. 00:15:34.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.546 EAL: Restoring previous memory policy: 4 00:15:34.546 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.546 EAL: request: mp_malloc_sync 00:15:34.546 EAL: No shared files mode enabled, IPC is disabled 00:15:34.546 EAL: Heap on socket 0 was expanded by 18MB 00:15:34.806 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.806 EAL: request: mp_malloc_sync 00:15:34.806 EAL: No shared files mode enabled, IPC is disabled 00:15:34.806 EAL: Heap on socket 0 was shrunk by 18MB 00:15:34.806 EAL: Trying to obtain current memory policy. 00:15:34.806 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.806 EAL: Restoring previous memory policy: 4 00:15:34.806 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.806 EAL: request: mp_malloc_sync 00:15:34.806 EAL: No shared files mode enabled, IPC is disabled 00:15:34.806 EAL: Heap on socket 0 was expanded by 34MB 00:15:34.806 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.806 EAL: request: mp_malloc_sync 00:15:34.806 EAL: No shared files mode enabled, IPC is disabled 00:15:34.806 EAL: Heap on socket 0 was shrunk by 34MB 00:15:34.806 EAL: Trying to obtain current memory policy. 
00:15:34.806 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:34.806 EAL: Restoring previous memory policy: 4 00:15:34.806 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.806 EAL: request: mp_malloc_sync 00:15:34.806 EAL: No shared files mode enabled, IPC is disabled 00:15:34.807 EAL: Heap on socket 0 was expanded by 66MB 00:15:34.807 EAL: Calling mem event callback 'spdk:(nil)' 00:15:34.807 EAL: request: mp_malloc_sync 00:15:34.807 EAL: No shared files mode enabled, IPC is disabled 00:15:34.807 EAL: Heap on socket 0 was shrunk by 66MB 00:15:35.068 EAL: Trying to obtain current memory policy. 00:15:35.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:35.068 EAL: Restoring previous memory policy: 4 00:15:35.068 EAL: Calling mem event callback 'spdk:(nil)' 00:15:35.068 EAL: request: mp_malloc_sync 00:15:35.068 EAL: No shared files mode enabled, IPC is disabled 00:15:35.068 EAL: Heap on socket 0 was expanded by 130MB 00:15:35.068 EAL: Calling mem event callback 'spdk:(nil)' 00:15:35.068 EAL: request: mp_malloc_sync 00:15:35.068 EAL: No shared files mode enabled, IPC is disabled 00:15:35.068 EAL: Heap on socket 0 was shrunk by 130MB 00:15:35.328 EAL: Trying to obtain current memory policy. 00:15:35.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:35.328 EAL: Restoring previous memory policy: 4 00:15:35.328 EAL: Calling mem event callback 'spdk:(nil)' 00:15:35.328 EAL: request: mp_malloc_sync 00:15:35.328 EAL: No shared files mode enabled, IPC is disabled 00:15:35.328 EAL: Heap on socket 0 was expanded by 258MB 00:15:35.590 EAL: Calling mem event callback 'spdk:(nil)' 00:15:35.590 EAL: request: mp_malloc_sync 00:15:35.590 EAL: No shared files mode enabled, IPC is disabled 00:15:35.590 EAL: Heap on socket 0 was shrunk by 258MB 00:15:35.851 EAL: Trying to obtain current memory policy. 00:15:35.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:35.852 EAL: Restoring previous memory policy: 4 00:15:35.852 EAL: Calling mem event callback 'spdk:(nil)' 00:15:35.852 EAL: request: mp_malloc_sync 00:15:35.852 EAL: No shared files mode enabled, IPC is disabled 00:15:35.852 EAL: Heap on socket 0 was expanded by 514MB 00:15:36.423 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.423 EAL: request: mp_malloc_sync 00:15:36.423 EAL: No shared files mode enabled, IPC is disabled 00:15:36.423 EAL: Heap on socket 0 was shrunk by 514MB 00:15:36.995 EAL: Trying to obtain current memory policy. 
00:15:36.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:37.255 EAL: Restoring previous memory policy: 4 00:15:37.255 EAL: Calling mem event callback 'spdk:(nil)' 00:15:37.255 EAL: request: mp_malloc_sync 00:15:37.255 EAL: No shared files mode enabled, IPC is disabled 00:15:37.255 EAL: Heap on socket 0 was expanded by 1026MB 00:15:38.641 EAL: Calling mem event callback 'spdk:(nil)' 00:15:38.641 EAL: request: mp_malloc_sync 00:15:38.641 EAL: No shared files mode enabled, IPC is disabled 00:15:38.641 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:39.583 passed 00:15:39.583 00:15:39.583 Run Summary: Type Total Ran Passed Failed Inactive 00:15:39.583 suites 1 1 n/a 0 0 00:15:39.583 tests 2 2 2 0 0 00:15:39.583 asserts 5817 5817 5817 0 n/a 00:15:39.583 00:15:39.583 Elapsed time = 5.142 seconds 00:15:39.583 EAL: Calling mem event callback 'spdk:(nil)' 00:15:39.583 EAL: request: mp_malloc_sync 00:15:39.583 EAL: No shared files mode enabled, IPC is disabled 00:15:39.583 EAL: Heap on socket 0 was shrunk by 2MB 00:15:39.583 EAL: No shared files mode enabled, IPC is disabled 00:15:39.583 EAL: No shared files mode enabled, IPC is disabled 00:15:39.583 EAL: No shared files mode enabled, IPC is disabled 00:15:39.583 00:15:39.583 real 0m5.420s 00:15:39.583 user 0m4.603s 00:15:39.583 sys 0m0.660s 00:15:39.583 04:37:46 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.583 04:37:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:39.583 ************************************ 00:15:39.583 END TEST env_vtophys 00:15:39.583 ************************************ 00:15:39.583 04:37:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:39.583 04:37:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:39.583 04:37:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.583 04:37:46 env -- common/autotest_common.sh@10 -- # set +x 00:15:39.583 ************************************ 00:15:39.583 START TEST env_pci 00:15:39.583 ************************************ 00:15:39.583 04:37:46 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:39.583 00:15:39.583 00:15:39.583 CUnit - A unit testing framework for C - Version 2.1-3 00:15:39.583 http://cunit.sourceforge.net/ 00:15:39.583 00:15:39.583 00:15:39.583 Suite: pci 00:15:39.583 Test: pci_hook ...[2024-11-27 04:37:46.715915] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57014 has claimed it 00:15:39.583 EAL: Cannot find device (10000:00:01.0) 00:15:39.583 passed 00:15:39.583 00:15:39.583 Run Summary: Type Total Ran Passed Failed Inactive 00:15:39.583 suites 1 1 n/a 0 0 00:15:39.583 tests 1 1 1 0 0 00:15:39.583 asserts 25 25 25 0 n/a 00:15:39.583 00:15:39.583 Elapsed time = 0.006 seconds 00:15:39.583 EAL: Failed to attach device on primary process 00:15:39.583 00:15:39.583 real 0m0.064s 00:15:39.583 user 0m0.027s 00:15:39.583 sys 0m0.035s 00:15:39.583 04:37:46 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.583 ************************************ 00:15:39.583 04:37:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:39.583 END TEST env_pci 00:15:39.583 ************************************ 00:15:39.843 04:37:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:39.843 04:37:46 env -- env/env.sh@15 -- # uname 00:15:39.843 04:37:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:39.843 04:37:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:39.843 04:37:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:39.843 04:37:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:39.843 04:37:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.843 04:37:46 env -- common/autotest_common.sh@10 -- # set +x 00:15:39.843 ************************************ 00:15:39.843 START TEST env_dpdk_post_init 00:15:39.843 ************************************ 00:15:39.843 04:37:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:39.843 EAL: Detected CPU lcores: 10 00:15:39.843 EAL: Detected NUMA nodes: 1 00:15:39.843 EAL: Detected shared linkage of DPDK 00:15:39.843 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:39.843 EAL: Selected IOVA mode 'PA' 00:15:39.843 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:39.843 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:39.843 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:39.843 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:15:39.843 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:15:40.105 Starting DPDK initialization... 00:15:40.105 Starting SPDK post initialization... 00:15:40.105 SPDK NVMe probe 00:15:40.105 Attaching to 0000:00:10.0 00:15:40.105 Attaching to 0000:00:11.0 00:15:40.105 Attaching to 0000:00:12.0 00:15:40.105 Attaching to 0000:00:13.0 00:15:40.105 Attached to 0000:00:13.0 00:15:40.105 Attached to 0000:00:10.0 00:15:40.105 Attached to 0000:00:11.0 00:15:40.105 Attached to 0000:00:12.0 00:15:40.105 Cleaning up... 
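The four controllers probed above are QEMU-emulated NVMe devices; the 1b36:0010 in each probe line is the emulated controller's PCI vendor:device pair. A quick way to cross-check what the test saw from inside the guest, assuming lspci is installed in the VM (the setup script path is the one used elsewhere in this workspace):

  # list the emulated NVMe controllers the probe reported (IDs as logged above)
  lspci -nn -d 1b36:0010
  # rebind them to a userspace driver so SPDK can claim them
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh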
00:15:40.105 00:15:40.105 real 0m0.260s 00:15:40.105 user 0m0.094s 00:15:40.105 sys 0m0.064s 00:15:40.105 04:37:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.105 ************************************ 00:15:40.105 END TEST env_dpdk_post_init 00:15:40.105 ************************************ 00:15:40.105 04:37:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 04:37:47 env -- env/env.sh@26 -- # uname 00:15:40.105 04:37:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:40.105 04:37:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:40.105 04:37:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.105 04:37:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.105 04:37:47 env -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 ************************************ 00:15:40.105 START TEST env_mem_callbacks 00:15:40.105 ************************************ 00:15:40.105 04:37:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:40.105 EAL: Detected CPU lcores: 10 00:15:40.105 EAL: Detected NUMA nodes: 1 00:15:40.105 EAL: Detected shared linkage of DPDK 00:15:40.105 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:40.105 EAL: Selected IOVA mode 'PA' 00:15:40.105 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:40.105 00:15:40.105 00:15:40.105 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.105 http://cunit.sourceforge.net/ 00:15:40.105 00:15:40.105 00:15:40.105 Suite: memory 00:15:40.105 Test: test ... 00:15:40.105 register 0x200000200000 2097152 00:15:40.105 malloc 3145728 00:15:40.105 register 0x200000400000 4194304 00:15:40.364 buf 0x2000004fffc0 len 3145728 PASSED 00:15:40.364 malloc 64 00:15:40.364 buf 0x2000004ffec0 len 64 PASSED 00:15:40.364 malloc 4194304 00:15:40.364 register 0x200000800000 6291456 00:15:40.364 buf 0x2000009fffc0 len 4194304 PASSED 00:15:40.364 free 0x2000004fffc0 3145728 00:15:40.364 free 0x2000004ffec0 64 00:15:40.364 unregister 0x200000400000 4194304 PASSED 00:15:40.364 free 0x2000009fffc0 4194304 00:15:40.364 unregister 0x200000800000 6291456 PASSED 00:15:40.364 malloc 8388608 00:15:40.364 register 0x200000400000 10485760 00:15:40.364 buf 0x2000005fffc0 len 8388608 PASSED 00:15:40.364 free 0x2000005fffc0 8388608 00:15:40.364 unregister 0x200000400000 10485760 PASSED 00:15:40.364 passed 00:15:40.364 00:15:40.364 Run Summary: Type Total Ran Passed Failed Inactive 00:15:40.364 suites 1 1 n/a 0 0 00:15:40.364 tests 1 1 1 0 0 00:15:40.364 asserts 15 15 15 0 n/a 00:15:40.364 00:15:40.364 Elapsed time = 0.043 seconds 00:15:40.364 00:15:40.364 real 0m0.215s 00:15:40.364 user 0m0.065s 00:15:40.364 sys 0m0.046s 00:15:40.364 ************************************ 00:15:40.364 END TEST env_mem_callbacks 00:15:40.364 ************************************ 00:15:40.364 04:37:47 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.364 04:37:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:40.364 ************************************ 00:15:40.364 END TEST env 00:15:40.364 ************************************ 00:15:40.364 00:15:40.364 real 0m6.704s 00:15:40.364 user 0m5.201s 00:15:40.364 sys 0m1.018s 00:15:40.364 04:37:47 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.364 04:37:47 env -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.364 04:37:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:40.364 04:37:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.364 04:37:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.364 04:37:47 -- common/autotest_common.sh@10 -- # set +x 00:15:40.364 ************************************ 00:15:40.364 START TEST rpc 00:15:40.364 ************************************ 00:15:40.364 04:37:47 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:40.364 * Looking for test storage... 00:15:40.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:40.364 04:37:47 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:40.364 04:37:47 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:40.364 04:37:47 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.625 04:37:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.625 04:37:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.625 04:37:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.625 04:37:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.625 04:37:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.625 04:37:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:40.625 04:37:47 rpc -- scripts/common.sh@345 -- # : 1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.625 04:37:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.625 04:37:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@353 -- # local d=1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.625 04:37:47 rpc -- scripts/common.sh@355 -- # echo 1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.625 04:37:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@353 -- # local d=2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.625 04:37:47 rpc -- scripts/common.sh@355 -- # echo 2 00:15:40.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
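The "Waiting for process to start up..." echo above is printed by the waitforlisten helper from autotest_common.sh. Condensed, the harness pattern these rpc tests run under looks roughly like this (a sketch assembled from the helper and command names visible in this log, not a drop-in script):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # target with only the bdev subsystem enabled
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT    # kill the target if any sub-test dies
  waitforlisten "$spdk_pid"                                   # blocks until /var/tmp/spdk.sock accepts RPCs
  # ... rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity run here ...
  trap - SIGINT SIGTERM EXIT
  killprocess "$spdk_pid"                                     # clean shutdown once all sub-tests pass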
00:15:40.625 04:37:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.625 04:37:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.625 04:37:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.625 04:37:47 rpc -- scripts/common.sh@368 -- # return 0 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.625 --rc genhtml_branch_coverage=1 00:15:40.625 --rc genhtml_function_coverage=1 00:15:40.625 --rc genhtml_legend=1 00:15:40.625 --rc geninfo_all_blocks=1 00:15:40.625 --rc geninfo_unexecuted_blocks=1 00:15:40.625 00:15:40.625 ' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.625 --rc genhtml_branch_coverage=1 00:15:40.625 --rc genhtml_function_coverage=1 00:15:40.625 --rc genhtml_legend=1 00:15:40.625 --rc geninfo_all_blocks=1 00:15:40.625 --rc geninfo_unexecuted_blocks=1 00:15:40.625 00:15:40.625 ' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.625 --rc genhtml_branch_coverage=1 00:15:40.625 --rc genhtml_function_coverage=1 00:15:40.625 --rc genhtml_legend=1 00:15:40.625 --rc geninfo_all_blocks=1 00:15:40.625 --rc geninfo_unexecuted_blocks=1 00:15:40.625 00:15:40.625 ' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.625 --rc genhtml_branch_coverage=1 00:15:40.625 --rc genhtml_function_coverage=1 00:15:40.625 --rc genhtml_legend=1 00:15:40.625 --rc geninfo_all_blocks=1 00:15:40.625 --rc geninfo_unexecuted_blocks=1 00:15:40.625 00:15:40.625 ' 00:15:40.625 04:37:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57141 00:15:40.625 04:37:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.625 04:37:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57141 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 57141 ']' 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.625 04:37:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:40.625 04:37:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.626 04:37:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.626 04:37:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.626 04:37:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.626 [2024-11-27 04:37:47.705933] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:15:40.626 [2024-11-27 04:37:47.706235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:15:40.887 [2024-11-27 04:37:47.868113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.887 [2024-11-27 04:37:47.969985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
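Once the target is listening, rpc_integrity (next) walks a create/wrap/inspect/delete cycle over that socket. The same sequence issued by hand, with method names exactly as they appear below and rpc.py standing in for the rpc_cmd wrapper (scripts/rpc.py from this repo, assumed on PATH; the jq checks mirror the test's assertions):

  rpc.py bdev_malloc_create 8 512                   # creates Malloc0: 16384 blocks of 512 bytes
  rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  rpc.py bdev_get_bdevs | jq length                 # expect 2: the malloc bdev plus its passthru
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete Malloc0
  rpc.py bdev_get_bdevs | jq length                 # expect 0 once both are gone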
00:15:40.887 [2024-11-27 04:37:47.970192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57141' to capture a snapshot of events at runtime. 00:15:40.887 [2024-11-27 04:37:47.970290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.887 [2024-11-27 04:37:47.970326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.888 [2024-11-27 04:37:47.970346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57141 for offline analysis/debug. 00:15:40.888 [2024-11-27 04:37:47.971243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.460 04:37:48 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.460 04:37:48 rpc -- common/autotest_common.sh@868 -- # return 0 00:15:41.460 04:37:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:41.460 04:37:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:41.460 04:37:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:41.460 04:37:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:41.460 04:37:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.460 04:37:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.460 04:37:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 ************************************ 00:15:41.460 START TEST rpc_integrity 00:15:41.460 ************************************ 00:15:41.460 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:15:41.460 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:41.460 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.460 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.460 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:41.460 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:41.722 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:41.722 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:41.722 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.722 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.722 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:41.723 { 00:15:41.723 "name": "Malloc0", 00:15:41.723 "aliases": [ 00:15:41.723 "00b9cd36-1d29-4540-9ef7-0d263e15b216" 00:15:41.723 ], 
00:15:41.723 "product_name": "Malloc disk", 00:15:41.723 "block_size": 512, 00:15:41.723 "num_blocks": 16384, 00:15:41.723 "uuid": "00b9cd36-1d29-4540-9ef7-0d263e15b216", 00:15:41.723 "assigned_rate_limits": { 00:15:41.723 "rw_ios_per_sec": 0, 00:15:41.723 "rw_mbytes_per_sec": 0, 00:15:41.723 "r_mbytes_per_sec": 0, 00:15:41.723 "w_mbytes_per_sec": 0 00:15:41.723 }, 00:15:41.723 "claimed": false, 00:15:41.723 "zoned": false, 00:15:41.723 "supported_io_types": { 00:15:41.723 "read": true, 00:15:41.723 "write": true, 00:15:41.723 "unmap": true, 00:15:41.723 "flush": true, 00:15:41.723 "reset": true, 00:15:41.723 "nvme_admin": false, 00:15:41.723 "nvme_io": false, 00:15:41.723 "nvme_io_md": false, 00:15:41.723 "write_zeroes": true, 00:15:41.723 "zcopy": true, 00:15:41.723 "get_zone_info": false, 00:15:41.723 "zone_management": false, 00:15:41.723 "zone_append": false, 00:15:41.723 "compare": false, 00:15:41.723 "compare_and_write": false, 00:15:41.723 "abort": true, 00:15:41.723 "seek_hole": false, 00:15:41.723 "seek_data": false, 00:15:41.723 "copy": true, 00:15:41.723 "nvme_iov_md": false 00:15:41.723 }, 00:15:41.723 "memory_domains": [ 00:15:41.723 { 00:15:41.723 "dma_device_id": "system", 00:15:41.723 "dma_device_type": 1 00:15:41.723 }, 00:15:41.723 { 00:15:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.723 "dma_device_type": 2 00:15:41.723 } 00:15:41.723 ], 00:15:41.723 "driver_specific": {} 00:15:41.723 } 00:15:41.723 ]' 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 [2024-11-27 04:37:48.733003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:41.723 [2024-11-27 04:37:48.733185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.723 [2024-11-27 04:37:48.733218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:41.723 [2024-11-27 04:37:48.733230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.723 [2024-11-27 04:37:48.735474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.723 [2024-11-27 04:37:48.735512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:41.723 Passthru0 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:41.723 { 00:15:41.723 "name": "Malloc0", 00:15:41.723 "aliases": [ 00:15:41.723 "00b9cd36-1d29-4540-9ef7-0d263e15b216" 00:15:41.723 ], 00:15:41.723 "product_name": "Malloc disk", 00:15:41.723 "block_size": 512, 00:15:41.723 "num_blocks": 16384, 00:15:41.723 "uuid": "00b9cd36-1d29-4540-9ef7-0d263e15b216", 00:15:41.723 "assigned_rate_limits": { 00:15:41.723 "rw_ios_per_sec": 0, 
00:15:41.723 "rw_mbytes_per_sec": 0, 00:15:41.723 "r_mbytes_per_sec": 0, 00:15:41.723 "w_mbytes_per_sec": 0 00:15:41.723 }, 00:15:41.723 "claimed": true, 00:15:41.723 "claim_type": "exclusive_write", 00:15:41.723 "zoned": false, 00:15:41.723 "supported_io_types": { 00:15:41.723 "read": true, 00:15:41.723 "write": true, 00:15:41.723 "unmap": true, 00:15:41.723 "flush": true, 00:15:41.723 "reset": true, 00:15:41.723 "nvme_admin": false, 00:15:41.723 "nvme_io": false, 00:15:41.723 "nvme_io_md": false, 00:15:41.723 "write_zeroes": true, 00:15:41.723 "zcopy": true, 00:15:41.723 "get_zone_info": false, 00:15:41.723 "zone_management": false, 00:15:41.723 "zone_append": false, 00:15:41.723 "compare": false, 00:15:41.723 "compare_and_write": false, 00:15:41.723 "abort": true, 00:15:41.723 "seek_hole": false, 00:15:41.723 "seek_data": false, 00:15:41.723 "copy": true, 00:15:41.723 "nvme_iov_md": false 00:15:41.723 }, 00:15:41.723 "memory_domains": [ 00:15:41.723 { 00:15:41.723 "dma_device_id": "system", 00:15:41.723 "dma_device_type": 1 00:15:41.723 }, 00:15:41.723 { 00:15:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.723 "dma_device_type": 2 00:15:41.723 } 00:15:41.723 ], 00:15:41.723 "driver_specific": {} 00:15:41.723 }, 00:15:41.723 { 00:15:41.723 "name": "Passthru0", 00:15:41.723 "aliases": [ 00:15:41.723 "32b701e0-ccb1-501e-a8a4-15a8d9c023a9" 00:15:41.723 ], 00:15:41.723 "product_name": "passthru", 00:15:41.723 "block_size": 512, 00:15:41.723 "num_blocks": 16384, 00:15:41.723 "uuid": "32b701e0-ccb1-501e-a8a4-15a8d9c023a9", 00:15:41.723 "assigned_rate_limits": { 00:15:41.723 "rw_ios_per_sec": 0, 00:15:41.723 "rw_mbytes_per_sec": 0, 00:15:41.723 "r_mbytes_per_sec": 0, 00:15:41.723 "w_mbytes_per_sec": 0 00:15:41.723 }, 00:15:41.723 "claimed": false, 00:15:41.723 "zoned": false, 00:15:41.723 "supported_io_types": { 00:15:41.723 "read": true, 00:15:41.723 "write": true, 00:15:41.723 "unmap": true, 00:15:41.723 "flush": true, 00:15:41.723 "reset": true, 00:15:41.723 "nvme_admin": false, 00:15:41.723 "nvme_io": false, 00:15:41.723 "nvme_io_md": false, 00:15:41.723 "write_zeroes": true, 00:15:41.723 "zcopy": true, 00:15:41.723 "get_zone_info": false, 00:15:41.723 "zone_management": false, 00:15:41.723 "zone_append": false, 00:15:41.723 "compare": false, 00:15:41.723 "compare_and_write": false, 00:15:41.723 "abort": true, 00:15:41.723 "seek_hole": false, 00:15:41.723 "seek_data": false, 00:15:41.723 "copy": true, 00:15:41.723 "nvme_iov_md": false 00:15:41.723 }, 00:15:41.723 "memory_domains": [ 00:15:41.723 { 00:15:41.723 "dma_device_id": "system", 00:15:41.723 "dma_device_type": 1 00:15:41.723 }, 00:15:41.723 { 00:15:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.723 "dma_device_type": 2 00:15:41.723 } 00:15:41.723 ], 00:15:41.723 "driver_specific": { 00:15:41.723 "passthru": { 00:15:41.723 "name": "Passthru0", 00:15:41.723 "base_bdev_name": "Malloc0" 00:15:41.723 } 00:15:41.723 } 00:15:41.723 } 00:15:41.723 ]' 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:41.723 ************************************ 00:15:41.723 END TEST rpc_integrity 00:15:41.723 ************************************ 00:15:41.723 04:37:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:41.723 00:15:41.723 real 0m0.248s 00:15:41.723 user 0m0.121s 00:15:41.723 sys 0m0.043s 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.723 04:37:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 04:37:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:41.723 04:37:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.723 04:37:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.723 04:37:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.985 ************************************ 00:15:41.985 START TEST rpc_plugins 00:15:41.985 ************************************ 00:15:41.985 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:41.986 { 00:15:41.986 "name": "Malloc1", 00:15:41.986 "aliases": [ 00:15:41.986 "05a64f43-730c-4521-9160-3305e865c3a0" 00:15:41.986 ], 00:15:41.986 "product_name": "Malloc disk", 00:15:41.986 "block_size": 4096, 00:15:41.986 "num_blocks": 256, 00:15:41.986 "uuid": "05a64f43-730c-4521-9160-3305e865c3a0", 00:15:41.986 "assigned_rate_limits": { 00:15:41.986 "rw_ios_per_sec": 0, 00:15:41.986 "rw_mbytes_per_sec": 0, 00:15:41.986 "r_mbytes_per_sec": 0, 00:15:41.986 "w_mbytes_per_sec": 0 00:15:41.986 }, 00:15:41.986 "claimed": false, 00:15:41.986 "zoned": false, 00:15:41.986 "supported_io_types": { 00:15:41.986 "read": true, 00:15:41.986 "write": true, 00:15:41.986 "unmap": true, 00:15:41.986 "flush": true, 00:15:41.986 "reset": true, 00:15:41.986 "nvme_admin": false, 00:15:41.986 "nvme_io": false, 00:15:41.986 "nvme_io_md": false, 00:15:41.986 "write_zeroes": true, 
00:15:41.986 "zcopy": true, 00:15:41.986 "get_zone_info": false, 00:15:41.986 "zone_management": false, 00:15:41.986 "zone_append": false, 00:15:41.986 "compare": false, 00:15:41.986 "compare_and_write": false, 00:15:41.986 "abort": true, 00:15:41.986 "seek_hole": false, 00:15:41.986 "seek_data": false, 00:15:41.986 "copy": true, 00:15:41.986 "nvme_iov_md": false 00:15:41.986 }, 00:15:41.986 "memory_domains": [ 00:15:41.986 { 00:15:41.986 "dma_device_id": "system", 00:15:41.986 "dma_device_type": 1 00:15:41.986 }, 00:15:41.986 { 00:15:41.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.986 "dma_device_type": 2 00:15:41.986 } 00:15:41.986 ], 00:15:41.986 "driver_specific": {} 00:15:41.986 } 00:15:41.986 ]' 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:41.986 04:37:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.986 04:37:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.986 04:37:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.986 04:37:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:41.986 04:37:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:41.986 ************************************ 00:15:41.986 END TEST rpc_plugins 00:15:41.986 ************************************ 00:15:41.986 04:37:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:41.986 00:15:41.986 real 0m0.122s 00:15:41.986 user 0m0.063s 00:15:41.986 sys 0m0.018s 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.986 04:37:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:41.986 04:37:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.986 04:37:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.986 04:37:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 ************************************ 00:15:41.986 START TEST rpc_trace_cmd_test 00:15:41.986 ************************************ 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.986 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:41.986 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57141", 00:15:41.986 "tpoint_group_mask": "0x8", 00:15:41.986 "iscsi_conn": { 00:15:41.986 "mask": "0x2", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "scsi": { 00:15:41.986 
"mask": "0x4", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "bdev": { 00:15:41.986 "mask": "0x8", 00:15:41.986 "tpoint_mask": "0xffffffffffffffff" 00:15:41.986 }, 00:15:41.986 "nvmf_rdma": { 00:15:41.986 "mask": "0x10", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "nvmf_tcp": { 00:15:41.986 "mask": "0x20", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "ftl": { 00:15:41.986 "mask": "0x40", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "blobfs": { 00:15:41.986 "mask": "0x80", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "dsa": { 00:15:41.986 "mask": "0x200", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "thread": { 00:15:41.986 "mask": "0x400", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "nvme_pcie": { 00:15:41.986 "mask": "0x800", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "iaa": { 00:15:41.986 "mask": "0x1000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "nvme_tcp": { 00:15:41.986 "mask": "0x2000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "bdev_nvme": { 00:15:41.986 "mask": "0x4000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "sock": { 00:15:41.986 "mask": "0x8000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "blob": { 00:15:41.986 "mask": "0x10000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "bdev_raid": { 00:15:41.986 "mask": "0x20000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 }, 00:15:41.986 "scheduler": { 00:15:41.986 "mask": "0x40000", 00:15:41.986 "tpoint_mask": "0x0" 00:15:41.986 } 00:15:41.986 }' 00:15:41.987 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:41.987 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:15:41.987 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:42.247 ************************************ 00:15:42.247 END TEST rpc_trace_cmd_test 00:15:42.247 ************************************ 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:42.247 00:15:42.247 real 0m0.171s 00:15:42.247 user 0m0.133s 00:15:42.247 sys 0m0.025s 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.247 04:37:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 04:37:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:42.247 04:37:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:42.247 04:37:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:42.247 04:37:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:42.247 04:37:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.247 04:37:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 ************************************ 00:15:42.247 START TEST rpc_daemon_integrity 00:15:42.247 
************************************ 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:42.247 { 00:15:42.247 "name": "Malloc2", 00:15:42.247 "aliases": [ 00:15:42.247 "e3ec544c-4ee3-4910-841f-180edc39eb9f" 00:15:42.247 ], 00:15:42.247 "product_name": "Malloc disk", 00:15:42.247 "block_size": 512, 00:15:42.247 "num_blocks": 16384, 00:15:42.247 "uuid": "e3ec544c-4ee3-4910-841f-180edc39eb9f", 00:15:42.247 "assigned_rate_limits": { 00:15:42.247 "rw_ios_per_sec": 0, 00:15:42.247 "rw_mbytes_per_sec": 0, 00:15:42.247 "r_mbytes_per_sec": 0, 00:15:42.247 "w_mbytes_per_sec": 0 00:15:42.247 }, 00:15:42.247 "claimed": false, 00:15:42.247 "zoned": false, 00:15:42.247 "supported_io_types": { 00:15:42.247 "read": true, 00:15:42.247 "write": true, 00:15:42.247 "unmap": true, 00:15:42.247 "flush": true, 00:15:42.247 "reset": true, 00:15:42.247 "nvme_admin": false, 00:15:42.247 "nvme_io": false, 00:15:42.247 "nvme_io_md": false, 00:15:42.247 "write_zeroes": true, 00:15:42.247 "zcopy": true, 00:15:42.247 "get_zone_info": false, 00:15:42.247 "zone_management": false, 00:15:42.247 "zone_append": false, 00:15:42.247 "compare": false, 00:15:42.247 "compare_and_write": false, 00:15:42.247 "abort": true, 00:15:42.247 "seek_hole": false, 00:15:42.247 "seek_data": false, 00:15:42.247 "copy": true, 00:15:42.247 "nvme_iov_md": false 00:15:42.247 }, 00:15:42.247 "memory_domains": [ 00:15:42.247 { 00:15:42.247 "dma_device_id": "system", 00:15:42.247 "dma_device_type": 1 00:15:42.247 }, 00:15:42.247 { 00:15:42.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.247 "dma_device_type": 2 00:15:42.247 } 00:15:42.247 ], 00:15:42.247 "driver_specific": {} 00:15:42.247 } 00:15:42.247 ]' 00:15:42.247 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:42.509 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:42.509 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:15:42.509 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.509 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.509 [2024-11-27 04:37:49.456584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:42.509 [2024-11-27 04:37:49.456749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.510 [2024-11-27 04:37:49.456776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:42.510 [2024-11-27 04:37:49.456788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.510 [2024-11-27 04:37:49.458996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.510 [2024-11-27 04:37:49.459037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:42.510 Passthru0 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:42.510 { 00:15:42.510 "name": "Malloc2", 00:15:42.510 "aliases": [ 00:15:42.510 "e3ec544c-4ee3-4910-841f-180edc39eb9f" 00:15:42.510 ], 00:15:42.510 "product_name": "Malloc disk", 00:15:42.510 "block_size": 512, 00:15:42.510 "num_blocks": 16384, 00:15:42.510 "uuid": "e3ec544c-4ee3-4910-841f-180edc39eb9f", 00:15:42.510 "assigned_rate_limits": { 00:15:42.510 "rw_ios_per_sec": 0, 00:15:42.510 "rw_mbytes_per_sec": 0, 00:15:42.510 "r_mbytes_per_sec": 0, 00:15:42.510 "w_mbytes_per_sec": 0 00:15:42.510 }, 00:15:42.510 "claimed": true, 00:15:42.510 "claim_type": "exclusive_write", 00:15:42.510 "zoned": false, 00:15:42.510 "supported_io_types": { 00:15:42.510 "read": true, 00:15:42.510 "write": true, 00:15:42.510 "unmap": true, 00:15:42.510 "flush": true, 00:15:42.510 "reset": true, 00:15:42.510 "nvme_admin": false, 00:15:42.510 "nvme_io": false, 00:15:42.510 "nvme_io_md": false, 00:15:42.510 "write_zeroes": true, 00:15:42.510 "zcopy": true, 00:15:42.510 "get_zone_info": false, 00:15:42.510 "zone_management": false, 00:15:42.510 "zone_append": false, 00:15:42.510 "compare": false, 00:15:42.510 "compare_and_write": false, 00:15:42.510 "abort": true, 00:15:42.510 "seek_hole": false, 00:15:42.510 "seek_data": false, 00:15:42.510 "copy": true, 00:15:42.510 "nvme_iov_md": false 00:15:42.510 }, 00:15:42.510 "memory_domains": [ 00:15:42.510 { 00:15:42.510 "dma_device_id": "system", 00:15:42.510 "dma_device_type": 1 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.510 "dma_device_type": 2 00:15:42.510 } 00:15:42.510 ], 00:15:42.510 "driver_specific": {} 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "name": "Passthru0", 00:15:42.510 "aliases": [ 00:15:42.510 "1d277538-8766-5ec8-9484-d07763b90688" 00:15:42.510 ], 00:15:42.510 "product_name": "passthru", 00:15:42.510 "block_size": 512, 00:15:42.510 "num_blocks": 16384, 00:15:42.510 "uuid": "1d277538-8766-5ec8-9484-d07763b90688", 00:15:42.510 "assigned_rate_limits": { 00:15:42.510 
"rw_ios_per_sec": 0, 00:15:42.510 "rw_mbytes_per_sec": 0, 00:15:42.510 "r_mbytes_per_sec": 0, 00:15:42.510 "w_mbytes_per_sec": 0 00:15:42.510 }, 00:15:42.510 "claimed": false, 00:15:42.510 "zoned": false, 00:15:42.510 "supported_io_types": { 00:15:42.510 "read": true, 00:15:42.510 "write": true, 00:15:42.510 "unmap": true, 00:15:42.510 "flush": true, 00:15:42.510 "reset": true, 00:15:42.510 "nvme_admin": false, 00:15:42.510 "nvme_io": false, 00:15:42.510 "nvme_io_md": false, 00:15:42.510 "write_zeroes": true, 00:15:42.510 "zcopy": true, 00:15:42.510 "get_zone_info": false, 00:15:42.510 "zone_management": false, 00:15:42.510 "zone_append": false, 00:15:42.510 "compare": false, 00:15:42.510 "compare_and_write": false, 00:15:42.510 "abort": true, 00:15:42.510 "seek_hole": false, 00:15:42.510 "seek_data": false, 00:15:42.510 "copy": true, 00:15:42.510 "nvme_iov_md": false 00:15:42.510 }, 00:15:42.510 "memory_domains": [ 00:15:42.510 { 00:15:42.510 "dma_device_id": "system", 00:15:42.510 "dma_device_type": 1 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.510 "dma_device_type": 2 00:15:42.510 } 00:15:42.510 ], 00:15:42.510 "driver_specific": { 00:15:42.510 "passthru": { 00:15:42.510 "name": "Passthru0", 00:15:42.510 "base_bdev_name": "Malloc2" 00:15:42.510 } 00:15:42.510 } 00:15:42.510 } 00:15:42.510 ]' 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:42.510 ************************************ 00:15:42.510 END TEST rpc_daemon_integrity 00:15:42.510 ************************************ 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:42.510 00:15:42.510 real 0m0.244s 00:15:42.510 user 0m0.127s 00:15:42.510 sys 0m0.033s 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.510 04:37:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:37:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:42.510 04:37:49 rpc -- rpc/rpc.sh@84 -- # killprocess 57141 00:15:42.510 04:37:49 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57141 ']' 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@958 -- # kill -0 57141 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@959 -- # uname 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57141 00:15:42.510 killing process with pid 57141 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57141' 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@973 -- # kill 57141 00:15:42.510 04:37:49 rpc -- common/autotest_common.sh@978 -- # wait 57141 00:15:44.427 ************************************ 00:15:44.427 END TEST rpc 00:15:44.427 ************************************ 00:15:44.427 00:15:44.427 real 0m3.706s 00:15:44.427 user 0m4.087s 00:15:44.427 sys 0m0.641s 00:15:44.427 04:37:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.427 04:37:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.427 04:37:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:44.427 04:37:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.427 04:37:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.427 04:37:51 -- common/autotest_common.sh@10 -- # set +x 00:15:44.427 ************************************ 00:15:44.427 START TEST skip_rpc 00:15:44.427 ************************************ 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:44.427 * Looking for test storage... 00:15:44.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.427 04:37:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:44.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.427 --rc genhtml_branch_coverage=1 00:15:44.427 --rc genhtml_function_coverage=1 00:15:44.427 --rc genhtml_legend=1 00:15:44.427 --rc geninfo_all_blocks=1 00:15:44.427 --rc geninfo_unexecuted_blocks=1 00:15:44.427 00:15:44.427 ' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:44.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.427 --rc genhtml_branch_coverage=1 00:15:44.427 --rc genhtml_function_coverage=1 00:15:44.427 --rc genhtml_legend=1 00:15:44.427 --rc geninfo_all_blocks=1 00:15:44.427 --rc geninfo_unexecuted_blocks=1 00:15:44.427 00:15:44.427 ' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:44.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.427 --rc genhtml_branch_coverage=1 00:15:44.427 --rc genhtml_function_coverage=1 00:15:44.427 --rc genhtml_legend=1 00:15:44.427 --rc geninfo_all_blocks=1 00:15:44.427 --rc geninfo_unexecuted_blocks=1 00:15:44.427 00:15:44.427 ' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:44.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.427 --rc genhtml_branch_coverage=1 00:15:44.427 --rc genhtml_function_coverage=1 00:15:44.427 --rc genhtml_legend=1 00:15:44.427 --rc geninfo_all_blocks=1 00:15:44.427 --rc geninfo_unexecuted_blocks=1 00:15:44.427 00:15:44.427 ' 00:15:44.427 04:37:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:44.427 04:37:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:44.427 04:37:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.427 04:37:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.427 ************************************ 00:15:44.427 START TEST skip_rpc 00:15:44.427 ************************************ 00:15:44.427 04:37:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:15:44.427 04:37:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57359 00:15:44.427 04:37:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:44.427 04:37:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:44.427 04:37:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:44.427 [2024-11-27 04:37:51.508587] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:15:44.427 [2024-11-27 04:37:51.508752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57359 ] 00:15:44.687 [2024-11-27 04:37:51.679625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.687 [2024-11-27 04:37:51.782425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57359 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57359 ']' 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57359 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57359 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57359' 00:15:50.050 killing process with pid 57359 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57359 00:15:50.050 04:37:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57359 00:15:50.992 00:15:50.992 real 0m6.561s 00:15:50.992 user 0m6.156s 00:15:50.992 sys 0m0.292s 00:15:50.992 04:37:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.992 ************************************ 00:15:50.992 END TEST skip_rpc 00:15:50.992 ************************************ 00:15:50.992 04:37:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.992 04:37:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:50.992 04:37:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.992 04:37:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.992 04:37:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.992 ************************************ 00:15:50.993 START TEST skip_rpc_with_json 00:15:50.993 ************************************ 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57452 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57452 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57452 ']' 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:50.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.993 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:50.993 [2024-11-27 04:37:58.102341] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
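Note: the skip_rpc pass that just completed (pid 57359, real 0m6.561s) boils down to one negative assertion: start spdk_tgt with --no-rpc-server and prove that an RPC call cannot succeed. A minimal stand-alone sketch of the same check, outside the autotest harness (the backgrounding and the 5-second settle delay are assumptions mirroring the harness, not its exact code; scripts/rpc.py is the stock SPDK RPC client):

  # start the target with its RPC server disabled
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                  # give the reactor time to come up

  # with no server listening on /var/tmp/spdk.sock, any RPC must fail
  if scripts/rpc.py spdk_get_version; then
      echo 'ERROR: RPC succeeded with --no-rpc-server' >&2
  fi

  kill "$tgt_pid" && wait "$tgt_pid"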
00:15:50.993 [2024-11-27 04:37:58.102467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57452 ] 00:15:51.254 [2024-11-27 04:37:58.260806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.254 [2024-11-27 04:37:58.362975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:51.825 [2024-11-27 04:37:58.969865] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:51.825 request: 00:15:51.825 { 00:15:51.825 "trtype": "tcp", 00:15:51.825 "method": "nvmf_get_transports", 00:15:51.825 "req_id": 1 00:15:51.825 } 00:15:51.825 Got JSON-RPC error response 00:15:51.825 response: 00:15:51.825 { 00:15:51.825 "code": -19, 00:15:51.825 "message": "No such device" 00:15:51.825 } 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:51.825 [2024-11-27 04:37:58.977977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.825 04:37:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:52.091 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.091 04:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:52.091 { 00:15:52.091 "subsystems": [ 00:15:52.091 { 00:15:52.091 "subsystem": "fsdev", 00:15:52.091 "config": [ 00:15:52.091 { 00:15:52.091 "method": "fsdev_set_opts", 00:15:52.091 "params": { 00:15:52.091 "fsdev_io_pool_size": 65535, 00:15:52.091 "fsdev_io_cache_size": 256 00:15:52.091 } 00:15:52.091 } 00:15:52.091 ] 00:15:52.091 }, 00:15:52.091 { 00:15:52.091 "subsystem": "keyring", 00:15:52.091 "config": [] 00:15:52.091 }, 00:15:52.091 { 00:15:52.092 "subsystem": "iobuf", 00:15:52.092 "config": [ 00:15:52.092 { 00:15:52.092 "method": "iobuf_set_options", 00:15:52.092 "params": { 00:15:52.092 "small_pool_count": 8192, 00:15:52.092 "large_pool_count": 1024, 00:15:52.092 "small_bufsize": 8192, 00:15:52.092 "large_bufsize": 135168, 00:15:52.092 "enable_numa": false 00:15:52.092 } 00:15:52.092 } 00:15:52.092 ] 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "subsystem": "sock", 00:15:52.092 "config": [ 00:15:52.092 { 
00:15:52.092 "method": "sock_set_default_impl", 00:15:52.092 "params": { 00:15:52.092 "impl_name": "posix" 00:15:52.092 } 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "method": "sock_impl_set_options", 00:15:52.092 "params": { 00:15:52.092 "impl_name": "ssl", 00:15:52.092 "recv_buf_size": 4096, 00:15:52.092 "send_buf_size": 4096, 00:15:52.092 "enable_recv_pipe": true, 00:15:52.092 "enable_quickack": false, 00:15:52.092 "enable_placement_id": 0, 00:15:52.092 "enable_zerocopy_send_server": true, 00:15:52.092 "enable_zerocopy_send_client": false, 00:15:52.092 "zerocopy_threshold": 0, 00:15:52.092 "tls_version": 0, 00:15:52.092 "enable_ktls": false 00:15:52.092 } 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "method": "sock_impl_set_options", 00:15:52.092 "params": { 00:15:52.092 "impl_name": "posix", 00:15:52.092 "recv_buf_size": 2097152, 00:15:52.092 "send_buf_size": 2097152, 00:15:52.092 "enable_recv_pipe": true, 00:15:52.092 "enable_quickack": false, 00:15:52.092 "enable_placement_id": 0, 00:15:52.092 "enable_zerocopy_send_server": true, 00:15:52.092 "enable_zerocopy_send_client": false, 00:15:52.092 "zerocopy_threshold": 0, 00:15:52.092 "tls_version": 0, 00:15:52.092 "enable_ktls": false 00:15:52.092 } 00:15:52.092 } 00:15:52.092 ] 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "subsystem": "vmd", 00:15:52.092 "config": [] 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "subsystem": "accel", 00:15:52.092 "config": [ 00:15:52.092 { 00:15:52.092 "method": "accel_set_options", 00:15:52.092 "params": { 00:15:52.092 "small_cache_size": 128, 00:15:52.092 "large_cache_size": 16, 00:15:52.092 "task_count": 2048, 00:15:52.092 "sequence_count": 2048, 00:15:52.092 "buf_count": 2048 00:15:52.092 } 00:15:52.092 } 00:15:52.092 ] 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "subsystem": "bdev", 00:15:52.092 "config": [ 00:15:52.092 { 00:15:52.092 "method": "bdev_set_options", 00:15:52.092 "params": { 00:15:52.092 "bdev_io_pool_size": 65535, 00:15:52.092 "bdev_io_cache_size": 256, 00:15:52.092 "bdev_auto_examine": true, 00:15:52.092 "iobuf_small_cache_size": 128, 00:15:52.092 "iobuf_large_cache_size": 16 00:15:52.092 } 00:15:52.092 }, 00:15:52.092 { 00:15:52.092 "method": "bdev_raid_set_options", 00:15:52.092 "params": { 00:15:52.092 "process_window_size_kb": 1024, 00:15:52.092 "process_max_bandwidth_mb_sec": 0 00:15:52.092 } 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "method": "bdev_iscsi_set_options", 00:15:52.093 "params": { 00:15:52.093 "timeout_sec": 30 00:15:52.093 } 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "method": "bdev_nvme_set_options", 00:15:52.093 "params": { 00:15:52.093 "action_on_timeout": "none", 00:15:52.093 "timeout_us": 0, 00:15:52.093 "timeout_admin_us": 0, 00:15:52.093 "keep_alive_timeout_ms": 10000, 00:15:52.093 "arbitration_burst": 0, 00:15:52.093 "low_priority_weight": 0, 00:15:52.093 "medium_priority_weight": 0, 00:15:52.093 "high_priority_weight": 0, 00:15:52.093 "nvme_adminq_poll_period_us": 10000, 00:15:52.093 "nvme_ioq_poll_period_us": 0, 00:15:52.093 "io_queue_requests": 0, 00:15:52.093 "delay_cmd_submit": true, 00:15:52.093 "transport_retry_count": 4, 00:15:52.093 "bdev_retry_count": 3, 00:15:52.093 "transport_ack_timeout": 0, 00:15:52.093 "ctrlr_loss_timeout_sec": 0, 00:15:52.093 "reconnect_delay_sec": 0, 00:15:52.093 "fast_io_fail_timeout_sec": 0, 00:15:52.093 "disable_auto_failback": false, 00:15:52.093 "generate_uuids": false, 00:15:52.093 "transport_tos": 0, 00:15:52.093 "nvme_error_stat": false, 00:15:52.093 "rdma_srq_size": 0, 00:15:52.093 "io_path_stat": false, 
00:15:52.093 "allow_accel_sequence": false, 00:15:52.093 "rdma_max_cq_size": 0, 00:15:52.093 "rdma_cm_event_timeout_ms": 0, 00:15:52.093 "dhchap_digests": [ 00:15:52.093 "sha256", 00:15:52.093 "sha384", 00:15:52.093 "sha512" 00:15:52.093 ], 00:15:52.093 "dhchap_dhgroups": [ 00:15:52.093 "null", 00:15:52.093 "ffdhe2048", 00:15:52.093 "ffdhe3072", 00:15:52.093 "ffdhe4096", 00:15:52.093 "ffdhe6144", 00:15:52.093 "ffdhe8192" 00:15:52.093 ] 00:15:52.093 } 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "method": "bdev_nvme_set_hotplug", 00:15:52.093 "params": { 00:15:52.093 "period_us": 100000, 00:15:52.093 "enable": false 00:15:52.093 } 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "method": "bdev_wait_for_examine" 00:15:52.093 } 00:15:52.093 ] 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "subsystem": "scsi", 00:15:52.093 "config": null 00:15:52.093 }, 00:15:52.093 { 00:15:52.093 "subsystem": "scheduler", 00:15:52.093 "config": [ 00:15:52.093 { 00:15:52.093 "method": "framework_set_scheduler", 00:15:52.093 "params": { 00:15:52.094 "name": "static" 00:15:52.094 } 00:15:52.094 } 00:15:52.094 ] 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "subsystem": "vhost_scsi", 00:15:52.094 "config": [] 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "subsystem": "vhost_blk", 00:15:52.094 "config": [] 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "subsystem": "ublk", 00:15:52.094 "config": [] 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "subsystem": "nbd", 00:15:52.094 "config": [] 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "subsystem": "nvmf", 00:15:52.094 "config": [ 00:15:52.094 { 00:15:52.094 "method": "nvmf_set_config", 00:15:52.094 "params": { 00:15:52.094 "discovery_filter": "match_any", 00:15:52.094 "admin_cmd_passthru": { 00:15:52.094 "identify_ctrlr": false 00:15:52.094 }, 00:15:52.094 "dhchap_digests": [ 00:15:52.094 "sha256", 00:15:52.094 "sha384", 00:15:52.094 "sha512" 00:15:52.094 ], 00:15:52.094 "dhchap_dhgroups": [ 00:15:52.094 "null", 00:15:52.094 "ffdhe2048", 00:15:52.094 "ffdhe3072", 00:15:52.094 "ffdhe4096", 00:15:52.094 "ffdhe6144", 00:15:52.094 "ffdhe8192" 00:15:52.094 ] 00:15:52.094 } 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "method": "nvmf_set_max_subsystems", 00:15:52.094 "params": { 00:15:52.094 "max_subsystems": 1024 00:15:52.094 } 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "method": "nvmf_set_crdt", 00:15:52.094 "params": { 00:15:52.094 "crdt1": 0, 00:15:52.094 "crdt2": 0, 00:15:52.094 "crdt3": 0 00:15:52.094 } 00:15:52.094 }, 00:15:52.094 { 00:15:52.094 "method": "nvmf_create_transport", 00:15:52.094 "params": { 00:15:52.094 "trtype": "TCP", 00:15:52.094 "max_queue_depth": 128, 00:15:52.094 "max_io_qpairs_per_ctrlr": 127, 00:15:52.098 "in_capsule_data_size": 4096, 00:15:52.098 "max_io_size": 131072, 00:15:52.098 "io_unit_size": 131072, 00:15:52.098 "max_aq_depth": 128, 00:15:52.098 "num_shared_buffers": 511, 00:15:52.098 "buf_cache_size": 4294967295, 00:15:52.098 "dif_insert_or_strip": false, 00:15:52.098 "zcopy": false, 00:15:52.098 "c2h_success": true, 00:15:52.098 "sock_priority": 0, 00:15:52.098 "abort_timeout_sec": 1, 00:15:52.098 "ack_timeout": 0, 00:15:52.098 "data_wr_pool_size": 0 00:15:52.098 } 00:15:52.098 } 00:15:52.098 ] 00:15:52.098 }, 00:15:52.098 { 00:15:52.099 "subsystem": "iscsi", 00:15:52.099 "config": [ 00:15:52.099 { 00:15:52.099 "method": "iscsi_set_options", 00:15:52.099 "params": { 00:15:52.099 "node_base": "iqn.2016-06.io.spdk", 00:15:52.099 "max_sessions": 128, 00:15:52.099 "max_connections_per_session": 2, 00:15:52.099 "max_queue_depth": 64, 00:15:52.099 
"default_time2wait": 2, 00:15:52.099 "default_time2retain": 20, 00:15:52.099 "first_burst_length": 8192, 00:15:52.099 "immediate_data": true, 00:15:52.099 "allow_duplicated_isid": false, 00:15:52.099 "error_recovery_level": 0, 00:15:52.099 "nop_timeout": 60, 00:15:52.099 "nop_in_interval": 30, 00:15:52.099 "disable_chap": false, 00:15:52.099 "require_chap": false, 00:15:52.099 "mutual_chap": false, 00:15:52.099 "chap_group": 0, 00:15:52.099 "max_large_datain_per_connection": 64, 00:15:52.099 "max_r2t_per_connection": 4, 00:15:52.099 "pdu_pool_size": 36864, 00:15:52.099 "immediate_data_pool_size": 16384, 00:15:52.099 "data_out_pool_size": 2048 00:15:52.099 } 00:15:52.099 } 00:15:52.099 ] 00:15:52.099 } 00:15:52.099 ] 00:15:52.099 } 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57452 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57452 ']' 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57452 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57452 00:15:52.099 killing process with pid 57452 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57452' 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57452 00:15:52.099 04:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57452 00:15:54.020 04:38:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57497 00:15:54.020 04:38:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:54.020 04:38:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57497 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57497 ']' 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57497 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57497 00:15:59.407 killing process with pid 57497 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57497' 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57497 00:15:59.407 04:38:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57497 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:00.347 00:16:00.347 real 0m9.234s 00:16:00.347 user 0m8.784s 00:16:00.347 sys 0m0.660s 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.347 ************************************ 00:16:00.347 END TEST skip_rpc_with_json 00:16:00.347 ************************************ 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:00.347 04:38:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.347 ************************************ 00:16:00.347 START TEST skip_rpc_with_delay 00:16:00.347 ************************************ 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:00.347 [2024-11-27 04:38:07.406027] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
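Note: skip_rpc_with_json, which finished just above, is a configuration round-trip. The first target (pid 57452) gets a TCP transport created over RPC, save_config dumps every subsystem to the JSON printed earlier, and a second target (pid 57497) then boots with --no-rpc-server --json config.json; its log is grepped for 'TCP Transport Init' to prove the transport was rebuilt from the file alone. A hand-driven sketch of the same round-trip (paths assume an SPDK checkout; the sleeps stand in for the harness's waitforlisten helper):

  build/bin/spdk_tgt -m 0x1 &
  sleep 5
  scripts/rpc.py nvmf_create_transport -t tcp      # state worth persisting
  scripts/rpc.py save_config > config.json         # full JSON dump, as shown above
  scripts/rpc.py spdk_kill_instance SIGTERM        # stop the first target
  wait

  # replay: no RPC server, configuration comes only from the file
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo 'transport restored from JSON'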
00:16:00.347 ************************************ 00:16:00.347 END TEST skip_rpc_with_delay 00:16:00.347 ************************************ 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.347 00:16:00.347 real 0m0.131s 00:16:00.347 user 0m0.071s 00:16:00.347 sys 0m0.058s 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.347 04:38:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:16:00.347 04:38:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:16:00.347 04:38:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:16:00.347 04:38:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.347 04:38:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.347 ************************************ 00:16:00.347 START TEST exit_on_failed_rpc_init 00:16:00.347 ************************************ 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:16:00.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57620 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57620 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57620 ']' 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.347 04:38:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:00.652 [2024-11-27 04:38:07.603528] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
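Note: skip_rpc_with_delay (results just above) is pure argument validation. --wait-for-rpc asks the app to pause initialization until a start-up RPC arrives, which is contradictory when --no-rpc-server is also given, so spdk_app_start aborts with the 'Cannot use --wait-for-rpc' error logged earlier, and the test passes only if the target exits non-zero. A minimal sketch of that assertion:

  # startup must fail: there is no RPC server to wait on
  if build/bin/spdk_tgt --no-rpc-server --wait-for-rpc -m 0x1; then
      echo 'ERROR: spdk_tgt started despite conflicting flags' >&2
      exit 1
  fi
  echo 'conflicting flags rejected, as expected'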
00:16:00.652 [2024-11-27 04:38:07.603794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57620 ] 00:16:00.652 [2024-11-27 04:38:07.770039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.939 [2024-11-27 04:38:07.872049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:01.510 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:01.510 [2024-11-27 04:38:08.553445] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:01.510 [2024-11-27 04:38:08.553569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:16:01.510 [2024-11-27 04:38:08.704915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.771 [2024-11-27 04:38:08.807769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.771 [2024-11-27 04:38:08.807852] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
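Note: the two rpc.c errors just logged are the point of exit_on_failed_rpc_init. The first target (pid 57620) holds the default RPC socket; a second one started on core mask 0x2 (pid 57638) tries to listen on the same /var/tmp/spdk.sock, rpc.c refuses with 'in use. Specify another.', and the test requires that second process to exit non-zero. The collision reproduces with two bare launches (a sketch, not the harness's code; -r, which names an alternate RPC socket, is how two targets would normally coexist):

  build/bin/spdk_tgt -m 0x1 &            # first target owns /var/tmp/spdk.sock
  sleep 5
  if build/bin/spdk_tgt -m 0x2; then     # same socket: RPC listen must fail
      echo 'ERROR: second target should not have started' >&2
  fi
  # to actually run two targets side by side, give the second its own socket:
  #   build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock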
00:16:01.771 [2024-11-27 04:38:08.807866] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:01.771 [2024-11-27 04:38:08.807879] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:02.032 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:16:02.032 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.032 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:16:02.032 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57620 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57620 ']' 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57620 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.033 04:38:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57620 00:16:02.033 killing process with pid 57620 00:16:02.033 04:38:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.033 04:38:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.033 04:38:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57620' 00:16:02.033 04:38:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57620 00:16:02.033 04:38:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57620 00:16:03.418 00:16:03.418 real 0m3.004s 00:16:03.418 user 0m3.279s 00:16:03.418 sys 0m0.444s 00:16:03.418 ************************************ 00:16:03.418 END TEST exit_on_failed_rpc_init 00:16:03.418 ************************************ 00:16:03.418 04:38:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.418 04:38:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 04:38:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:03.418 ************************************ 00:16:03.418 END TEST skip_rpc 00:16:03.418 ************************************ 00:16:03.418 00:16:03.418 real 0m19.332s 00:16:03.418 user 0m18.430s 00:16:03.418 sys 0m1.629s 00:16:03.418 04:38:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.418 04:38:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.681 04:38:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:03.681 04:38:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.681 04:38:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.681 04:38:10 -- common/autotest_common.sh@10 -- # set +x 00:16:03.681 
************************************ 00:16:03.681 START TEST rpc_client 00:16:03.681 ************************************ 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:03.681 * Looking for test storage... 00:16:03.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.681 04:38:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.681 --rc genhtml_branch_coverage=1 00:16:03.681 --rc genhtml_function_coverage=1 00:16:03.681 --rc genhtml_legend=1 00:16:03.681 --rc geninfo_all_blocks=1 00:16:03.681 --rc geninfo_unexecuted_blocks=1 00:16:03.681 00:16:03.681 ' 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.681 --rc genhtml_branch_coverage=1 00:16:03.681 --rc genhtml_function_coverage=1 00:16:03.681 --rc genhtml_legend=1 00:16:03.681 --rc geninfo_all_blocks=1 00:16:03.681 --rc geninfo_unexecuted_blocks=1 00:16:03.681 00:16:03.681 ' 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.681 --rc genhtml_branch_coverage=1 00:16:03.681 --rc genhtml_function_coverage=1 00:16:03.681 --rc genhtml_legend=1 00:16:03.681 --rc geninfo_all_blocks=1 00:16:03.681 --rc geninfo_unexecuted_blocks=1 00:16:03.681 00:16:03.681 ' 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.681 --rc genhtml_branch_coverage=1 00:16:03.681 --rc genhtml_function_coverage=1 00:16:03.681 --rc genhtml_legend=1 00:16:03.681 --rc geninfo_all_blocks=1 00:16:03.681 --rc geninfo_unexecuted_blocks=1 00:16:03.681 00:16:03.681 ' 00:16:03.681 04:38:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:16:03.681 OK 00:16:03.681 04:38:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:16:03.681 ************************************ 00:16:03.681 END TEST rpc_client 00:16:03.681 ************************************ 00:16:03.681 00:16:03.681 real 0m0.200s 00:16:03.681 user 0m0.118s 00:16:03.681 sys 0m0.079s 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.681 04:38:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:16:03.942 04:38:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:03.942 04:38:10 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.943 04:38:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.943 04:38:10 -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 ************************************ 00:16:03.943 START TEST json_config 00:16:03.943 ************************************ 00:16:03.943 04:38:10 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:03.943 04:38:10 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:03.943 04:38:10 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:16:03.943 04:38:10 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.943 04:38:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.943 04:38:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.943 04:38:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.943 04:38:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.943 04:38:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.943 04:38:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:16:03.943 04:38:11 json_config -- scripts/common.sh@345 -- # : 1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.943 04:38:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.943 04:38:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@353 -- # local d=1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.943 04:38:11 json_config -- scripts/common.sh@355 -- # echo 1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.943 04:38:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@353 -- # local d=2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.943 04:38:11 json_config -- scripts/common.sh@355 -- # echo 2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.943 04:38:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.943 04:38:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.943 04:38:11 json_config -- scripts/common.sh@368 -- # return 0 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.943 --rc genhtml_branch_coverage=1 00:16:03.943 --rc genhtml_function_coverage=1 00:16:03.943 --rc genhtml_legend=1 00:16:03.943 --rc geninfo_all_blocks=1 00:16:03.943 --rc geninfo_unexecuted_blocks=1 00:16:03.943 00:16:03.943 ' 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.943 --rc genhtml_branch_coverage=1 00:16:03.943 --rc genhtml_function_coverage=1 00:16:03.943 --rc genhtml_legend=1 00:16:03.943 --rc geninfo_all_blocks=1 00:16:03.943 --rc geninfo_unexecuted_blocks=1 00:16:03.943 00:16:03.943 ' 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.943 --rc genhtml_branch_coverage=1 00:16:03.943 --rc genhtml_function_coverage=1 00:16:03.943 --rc genhtml_legend=1 00:16:03.943 --rc geninfo_all_blocks=1 00:16:03.943 --rc geninfo_unexecuted_blocks=1 00:16:03.943 00:16:03.943 ' 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.943 --rc genhtml_branch_coverage=1 00:16:03.943 --rc genhtml_function_coverage=1 00:16:03.943 --rc genhtml_legend=1 00:16:03.943 --rc geninfo_all_blocks=1 00:16:03.943 --rc geninfo_unexecuted_blocks=1 00:16:03.943 00:16:03.943 ' 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.943 04:38:11 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.943 04:38:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.943 04:38:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.943 04:38:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.943 04:38:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.943 04:38:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.943 04:38:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.943 04:38:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.943 04:38:11 json_config -- paths/export.sh@5 -- # export PATH 00:16:03.943 04:38:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@51 -- # : 0 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.943 04:38:11 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.943 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.943 04:38:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:16:03.943 WARNING: No tests are enabled so not running JSON configuration tests 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:16:03.943 04:38:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:16:03.943 ************************************ 00:16:03.943 END TEST json_config 00:16:03.943 ************************************ 00:16:03.943 00:16:03.943 real 0m0.152s 00:16:03.943 user 0m0.095s 00:16:03.943 sys 0m0.052s 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.943 04:38:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 04:38:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:03.944 04:38:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.944 04:38:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.944 04:38:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 ************************************ 00:16:03.944 START TEST json_config_extra_key 00:16:03.944 ************************************ 00:16:03.944 04:38:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.205 04:38:11 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.205 04:38:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.205 --rc genhtml_branch_coverage=1 00:16:04.205 --rc genhtml_function_coverage=1 00:16:04.205 --rc genhtml_legend=1 00:16:04.205 --rc geninfo_all_blocks=1 00:16:04.205 --rc geninfo_unexecuted_blocks=1 00:16:04.205 00:16:04.205 ' 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.205 --rc genhtml_branch_coverage=1 00:16:04.205 --rc genhtml_function_coverage=1 00:16:04.205 --rc genhtml_legend=1 00:16:04.205 --rc geninfo_all_blocks=1 00:16:04.205 --rc geninfo_unexecuted_blocks=1 00:16:04.205 00:16:04.205 ' 00:16:04.205 04:38:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.206 --rc genhtml_branch_coverage=1 00:16:04.206 --rc genhtml_function_coverage=1 00:16:04.206 --rc genhtml_legend=1 00:16:04.206 --rc geninfo_all_blocks=1 00:16:04.206 --rc geninfo_unexecuted_blocks=1 00:16:04.206 00:16:04.206 ' 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.206 --rc genhtml_branch_coverage=1 00:16:04.206 --rc 
genhtml_function_coverage=1 00:16:04.206 --rc genhtml_legend=1 00:16:04.206 --rc geninfo_all_blocks=1 00:16:04.206 --rc geninfo_unexecuted_blocks=1 00:16:04.206 00:16:04.206 ' 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8ccf9d2e-9feb-41d7-a93b-57eb2269e94e 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.206 04:38:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.206 04:38:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.206 04:38:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.206 04:38:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.206 04:38:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.206 04:38:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.206 04:38:11 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.206 04:38:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:16:04.206 04:38:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.206 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.206 04:38:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:16:04.206 INFO: launching applications... 00:16:04.206 Waiting for target to run... 
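The setup trace above walks SPDK's lt/cmp_versions helpers (scripts/common.sh) while choosing lcov options: each version string is split on '.', '-' and ':' and compared field by field until one side wins. A minimal standalone sketch of that logic, reconstructed from the xtrace alone — the decimal fallback to 0 for non-numeric fields and the handling of operators other than '<' and '>' are assumptions, not the script's literal code:

    # Sketch of the version comparison stepped through in the trace above.
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then
            echo "$d"
        else
            echo 0   # assumption: non-numeric fields compare as 0
        fi
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        local op=$2 lt=0 gt=0 eq=0 v
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && gt=1 && break
            (( ver1[v] < ver2[v] )) && lt=1 && break
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   return 2 ;;   # assumption: real helper covers more operators
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the call seen in the trace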
00:16:04.206 04:38:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57831 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57831 /var/tmp/spdk_tgt.sock 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57831 ']' 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:04.206 04:38:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.206 04:38:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:04.206 [2024-11-27 04:38:11.339911] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:04.206 [2024-11-27 04:38:11.340206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57831 ] 00:16:04.467 [2024-11-27 04:38:11.667732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.732 [2024-11-27 04:38:11.762144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.304 04:38:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.304 04:38:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:16:05.304 00:16:05.304 INFO: shutting down applications... 00:16:05.304 04:38:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:16:05.304 04:38:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57831 ]] 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57831 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57831 00:16:05.304 04:38:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:05.874 04:38:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:05.874 04:38:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:05.874 04:38:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57831 00:16:05.874 04:38:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:06.134 04:38:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:06.134 04:38:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:06.134 04:38:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57831 00:16:06.134 04:38:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:06.705 04:38:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:06.705 04:38:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:06.705 04:38:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57831 00:16:06.705 04:38:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57831 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:07.276 SPDK target shutdown done 00:16:07.276 04:38:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:07.276 Success 00:16:07.276 04:38:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:16:07.276 ************************************ 00:16:07.276 END TEST json_config_extra_key 00:16:07.276 ************************************ 00:16:07.276 00:16:07.276 real 0m3.170s 00:16:07.276 user 0m2.802s 00:16:07.276 sys 0m0.378s 00:16:07.276 04:38:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.276 04:38:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:07.276 04:38:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:07.276 04:38:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:07.276 04:38:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.276 04:38:14 -- common/autotest_common.sh@10 -- # set +x 00:16:07.276 
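The shutdown sequence traced above (json_config/common.sh@38-45) sends SIGINT to the spdk_tgt process and then polls it with `kill -0` for up to 30 half-second intervals; here the target exits after four polls and the test prints "SPDK target shutdown done". A condensed sketch of that polling pattern as the trace shows it — the timeout/error branch is assumed, since this run never reaches it:

    # context assumed from the trace: app_pid is an associative array of PIDs
    declare -A app_pid

    json_config_test_shutdown_app() {
        local app=$1
        [[ -n ${app_pid[$app]} ]] || return 0
        kill -SIGINT "${app_pid[$app]}"        # ask the target to exit cleanly
        local i
        for (( i = 0; i < 30; i++ )); do       # up to 30 * 0.5 s = 15 s
            if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then  # probe only
                app_pid[$app]=
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1                               # assumed timeout path (not hit here)
    }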
************************************ 00:16:07.276 START TEST alias_rpc 00:16:07.276 ************************************ 00:16:07.276 04:38:14 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:07.276 * Looking for test storage... 00:16:07.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:16:07.276 04:38:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:07.276 04:38:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:07.276 04:38:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.536 04:38:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:07.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.536 --rc genhtml_branch_coverage=1 00:16:07.536 --rc genhtml_function_coverage=1 00:16:07.536 --rc genhtml_legend=1 00:16:07.536 --rc geninfo_all_blocks=1 00:16:07.536 --rc geninfo_unexecuted_blocks=1 00:16:07.536 00:16:07.536 ' 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:07.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.536 --rc genhtml_branch_coverage=1 00:16:07.536 --rc genhtml_function_coverage=1 00:16:07.536 --rc genhtml_legend=1 00:16:07.536 --rc geninfo_all_blocks=1 00:16:07.536 --rc geninfo_unexecuted_blocks=1 00:16:07.536 00:16:07.536 ' 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:07.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.536 --rc genhtml_branch_coverage=1 00:16:07.536 --rc genhtml_function_coverage=1 00:16:07.536 --rc genhtml_legend=1 00:16:07.536 --rc geninfo_all_blocks=1 00:16:07.536 --rc geninfo_unexecuted_blocks=1 00:16:07.536 00:16:07.536 ' 00:16:07.536 04:38:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:07.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.536 --rc genhtml_branch_coverage=1 00:16:07.536 --rc genhtml_function_coverage=1 00:16:07.536 --rc genhtml_legend=1 00:16:07.536 --rc geninfo_all_blocks=1 00:16:07.536 --rc geninfo_unexecuted_blocks=1 00:16:07.537 00:16:07.537 ' 00:16:07.537 04:38:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:07.537 04:38:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57924 00:16:07.537 04:38:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57924 00:16:07.537 04:38:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57924 ']' 00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
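Each suite in this log starts its target the same way: launch build/bin/spdk_tgt, then call waitforlisten with the PID and the RPC socket path. The helper's body runs with xtrace disabled (common/autotest_common.sh@844 above), so only its contract is visible: pid, rpc_addr, max_retries=100, and the "Waiting for process to start up..." message. A plausible reconstruction under those constraints — the probe via rpc.py spdk_get_version (a method this target does expose, per the rpc_get_methods listing later in this log) and the 0.1 s poll interval are assumptions:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                   spdk_get_version &> /dev/null; then
                return 0                             # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }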
00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.537 04:38:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.537 [2024-11-27 04:38:14.579432] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:07.537 [2024-11-27 04:38:14.579558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:16:07.537 [2024-11-27 04:38:14.735394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.796 [2024-11-27 04:38:14.839172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.402 04:38:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.402 04:38:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:08.402 04:38:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:16:08.663 04:38:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57924 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57924 ']' 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57924 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57924 00:16:08.663 killing process with pid 57924 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57924' 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 57924 00:16:08.663 04:38:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 57924 00:16:10.579 ************************************ 00:16:10.579 END TEST alias_rpc 00:16:10.579 ************************************ 00:16:10.579 00:16:10.579 real 0m2.915s 00:16:10.579 user 0m3.040s 00:16:10.579 sys 0m0.382s 00:16:10.579 04:38:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.579 04:38:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.579 04:38:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:16:10.579 04:38:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:10.579 04:38:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.579 04:38:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.579 04:38:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.579 ************************************ 00:16:10.579 START TEST spdkcli_tcp 00:16:10.579 ************************************ 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:10.579 * Looking for test storage... 
00:16:10.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.579 04:38:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.579 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:10.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.579 --rc genhtml_branch_coverage=1 00:16:10.579 --rc genhtml_function_coverage=1 00:16:10.579 --rc genhtml_legend=1 00:16:10.580 --rc geninfo_all_blocks=1 00:16:10.580 --rc geninfo_unexecuted_blocks=1 00:16:10.580 00:16:10.580 ' 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.580 --rc genhtml_branch_coverage=1 00:16:10.580 --rc genhtml_function_coverage=1 00:16:10.580 --rc genhtml_legend=1 00:16:10.580 --rc geninfo_all_blocks=1 00:16:10.580 --rc geninfo_unexecuted_blocks=1 00:16:10.580 
00:16:10.580 ' 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.580 --rc genhtml_branch_coverage=1 00:16:10.580 --rc genhtml_function_coverage=1 00:16:10.580 --rc genhtml_legend=1 00:16:10.580 --rc geninfo_all_blocks=1 00:16:10.580 --rc geninfo_unexecuted_blocks=1 00:16:10.580 00:16:10.580 ' 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.580 --rc genhtml_branch_coverage=1 00:16:10.580 --rc genhtml_function_coverage=1 00:16:10.580 --rc genhtml_legend=1 00:16:10.580 --rc geninfo_all_blocks=1 00:16:10.580 --rc geninfo_unexecuted_blocks=1 00:16:10.580 00:16:10.580 ' 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58020 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58020 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58020 ']' 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.580 04:38:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.580 04:38:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.580 [2024-11-27 04:38:17.556260] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
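The spdkcli_tcp test that follows exercises the RPC server over TCP rather than the UNIX socket: it bridges the two with socat and points rpc.py at 127.0.0.1:9998, as the trace below shows. A sketch of that pattern using the exact values from this run (the lifetime of the socat bridge is simplified here; the test keeps it running and tears it down via its err_cleanup trap):

    # Bridge SPDK's UNIX-socket RPC server to a local TCP port.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Same client, now over TCP (-r retries, -t timeout in seconds).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null   # cleanup for this sketch only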
00:16:10.580 [2024-11-27 04:38:17.556384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58020 ] 00:16:10.580 [2024-11-27 04:38:17.712807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.842 [2024-11-27 04:38:17.816348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.842 [2024-11-27 04:38:17.816432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.419 04:38:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.419 04:38:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:16:11.419 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58037 00:16:11.419 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:11.419 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:11.725 [ 00:16:11.725 "bdev_malloc_delete", 00:16:11.726 "bdev_malloc_create", 00:16:11.726 "bdev_null_resize", 00:16:11.726 "bdev_null_delete", 00:16:11.726 "bdev_null_create", 00:16:11.726 "bdev_nvme_cuse_unregister", 00:16:11.726 "bdev_nvme_cuse_register", 00:16:11.726 "bdev_opal_new_user", 00:16:11.726 "bdev_opal_set_lock_state", 00:16:11.726 "bdev_opal_delete", 00:16:11.726 "bdev_opal_get_info", 00:16:11.726 "bdev_opal_create", 00:16:11.726 "bdev_nvme_opal_revert", 00:16:11.726 "bdev_nvme_opal_init", 00:16:11.726 "bdev_nvme_send_cmd", 00:16:11.726 "bdev_nvme_set_keys", 00:16:11.726 "bdev_nvme_get_path_iostat", 00:16:11.726 "bdev_nvme_get_mdns_discovery_info", 00:16:11.726 "bdev_nvme_stop_mdns_discovery", 00:16:11.726 "bdev_nvme_start_mdns_discovery", 00:16:11.726 "bdev_nvme_set_multipath_policy", 00:16:11.726 "bdev_nvme_set_preferred_path", 00:16:11.726 "bdev_nvme_get_io_paths", 00:16:11.726 "bdev_nvme_remove_error_injection", 00:16:11.726 "bdev_nvme_add_error_injection", 00:16:11.726 "bdev_nvme_get_discovery_info", 00:16:11.726 "bdev_nvme_stop_discovery", 00:16:11.726 "bdev_nvme_start_discovery", 00:16:11.726 "bdev_nvme_get_controller_health_info", 00:16:11.726 "bdev_nvme_disable_controller", 00:16:11.726 "bdev_nvme_enable_controller", 00:16:11.726 "bdev_nvme_reset_controller", 00:16:11.726 "bdev_nvme_get_transport_statistics", 00:16:11.726 "bdev_nvme_apply_firmware", 00:16:11.726 "bdev_nvme_detach_controller", 00:16:11.726 "bdev_nvme_get_controllers", 00:16:11.726 "bdev_nvme_attach_controller", 00:16:11.726 "bdev_nvme_set_hotplug", 00:16:11.726 "bdev_nvme_set_options", 00:16:11.726 "bdev_passthru_delete", 00:16:11.726 "bdev_passthru_create", 00:16:11.726 "bdev_lvol_set_parent_bdev", 00:16:11.726 "bdev_lvol_set_parent", 00:16:11.726 "bdev_lvol_check_shallow_copy", 00:16:11.726 "bdev_lvol_start_shallow_copy", 00:16:11.726 "bdev_lvol_grow_lvstore", 00:16:11.726 "bdev_lvol_get_lvols", 00:16:11.726 "bdev_lvol_get_lvstores", 00:16:11.726 "bdev_lvol_delete", 00:16:11.726 "bdev_lvol_set_read_only", 00:16:11.726 "bdev_lvol_resize", 00:16:11.726 "bdev_lvol_decouple_parent", 00:16:11.726 "bdev_lvol_inflate", 00:16:11.726 "bdev_lvol_rename", 00:16:11.726 "bdev_lvol_clone_bdev", 00:16:11.726 "bdev_lvol_clone", 00:16:11.726 "bdev_lvol_snapshot", 00:16:11.726 "bdev_lvol_create", 00:16:11.726 "bdev_lvol_delete_lvstore", 00:16:11.726 "bdev_lvol_rename_lvstore", 00:16:11.726 
"bdev_lvol_create_lvstore", 00:16:11.726 "bdev_raid_set_options", 00:16:11.726 "bdev_raid_remove_base_bdev", 00:16:11.726 "bdev_raid_add_base_bdev", 00:16:11.726 "bdev_raid_delete", 00:16:11.726 "bdev_raid_create", 00:16:11.726 "bdev_raid_get_bdevs", 00:16:11.726 "bdev_error_inject_error", 00:16:11.726 "bdev_error_delete", 00:16:11.726 "bdev_error_create", 00:16:11.726 "bdev_split_delete", 00:16:11.726 "bdev_split_create", 00:16:11.726 "bdev_delay_delete", 00:16:11.726 "bdev_delay_create", 00:16:11.726 "bdev_delay_update_latency", 00:16:11.726 "bdev_zone_block_delete", 00:16:11.726 "bdev_zone_block_create", 00:16:11.726 "blobfs_create", 00:16:11.726 "blobfs_detect", 00:16:11.726 "blobfs_set_cache_size", 00:16:11.726 "bdev_xnvme_delete", 00:16:11.726 "bdev_xnvme_create", 00:16:11.726 "bdev_aio_delete", 00:16:11.726 "bdev_aio_rescan", 00:16:11.726 "bdev_aio_create", 00:16:11.726 "bdev_ftl_set_property", 00:16:11.726 "bdev_ftl_get_properties", 00:16:11.726 "bdev_ftl_get_stats", 00:16:11.726 "bdev_ftl_unmap", 00:16:11.726 "bdev_ftl_unload", 00:16:11.726 "bdev_ftl_delete", 00:16:11.726 "bdev_ftl_load", 00:16:11.726 "bdev_ftl_create", 00:16:11.726 "bdev_virtio_attach_controller", 00:16:11.726 "bdev_virtio_scsi_get_devices", 00:16:11.726 "bdev_virtio_detach_controller", 00:16:11.726 "bdev_virtio_blk_set_hotplug", 00:16:11.726 "bdev_iscsi_delete", 00:16:11.726 "bdev_iscsi_create", 00:16:11.726 "bdev_iscsi_set_options", 00:16:11.726 "accel_error_inject_error", 00:16:11.726 "ioat_scan_accel_module", 00:16:11.726 "dsa_scan_accel_module", 00:16:11.726 "iaa_scan_accel_module", 00:16:11.726 "keyring_file_remove_key", 00:16:11.726 "keyring_file_add_key", 00:16:11.726 "keyring_linux_set_options", 00:16:11.726 "fsdev_aio_delete", 00:16:11.726 "fsdev_aio_create", 00:16:11.726 "iscsi_get_histogram", 00:16:11.726 "iscsi_enable_histogram", 00:16:11.726 "iscsi_set_options", 00:16:11.726 "iscsi_get_auth_groups", 00:16:11.726 "iscsi_auth_group_remove_secret", 00:16:11.726 "iscsi_auth_group_add_secret", 00:16:11.726 "iscsi_delete_auth_group", 00:16:11.726 "iscsi_create_auth_group", 00:16:11.726 "iscsi_set_discovery_auth", 00:16:11.726 "iscsi_get_options", 00:16:11.726 "iscsi_target_node_request_logout", 00:16:11.726 "iscsi_target_node_set_redirect", 00:16:11.726 "iscsi_target_node_set_auth", 00:16:11.726 "iscsi_target_node_add_lun", 00:16:11.726 "iscsi_get_stats", 00:16:11.726 "iscsi_get_connections", 00:16:11.726 "iscsi_portal_group_set_auth", 00:16:11.726 "iscsi_start_portal_group", 00:16:11.726 "iscsi_delete_portal_group", 00:16:11.726 "iscsi_create_portal_group", 00:16:11.726 "iscsi_get_portal_groups", 00:16:11.726 "iscsi_delete_target_node", 00:16:11.726 "iscsi_target_node_remove_pg_ig_maps", 00:16:11.726 "iscsi_target_node_add_pg_ig_maps", 00:16:11.726 "iscsi_create_target_node", 00:16:11.726 "iscsi_get_target_nodes", 00:16:11.726 "iscsi_delete_initiator_group", 00:16:11.726 "iscsi_initiator_group_remove_initiators", 00:16:11.726 "iscsi_initiator_group_add_initiators", 00:16:11.726 "iscsi_create_initiator_group", 00:16:11.726 "iscsi_get_initiator_groups", 00:16:11.726 "nvmf_set_crdt", 00:16:11.726 "nvmf_set_config", 00:16:11.726 "nvmf_set_max_subsystems", 00:16:11.726 "nvmf_stop_mdns_prr", 00:16:11.726 "nvmf_publish_mdns_prr", 00:16:11.726 "nvmf_subsystem_get_listeners", 00:16:11.726 "nvmf_subsystem_get_qpairs", 00:16:11.726 "nvmf_subsystem_get_controllers", 00:16:11.726 "nvmf_get_stats", 00:16:11.726 "nvmf_get_transports", 00:16:11.726 "nvmf_create_transport", 00:16:11.726 "nvmf_get_targets", 00:16:11.726 
"nvmf_delete_target", 00:16:11.726 "nvmf_create_target", 00:16:11.726 "nvmf_subsystem_allow_any_host", 00:16:11.726 "nvmf_subsystem_set_keys", 00:16:11.726 "nvmf_subsystem_remove_host", 00:16:11.726 "nvmf_subsystem_add_host", 00:16:11.726 "nvmf_ns_remove_host", 00:16:11.726 "nvmf_ns_add_host", 00:16:11.726 "nvmf_subsystem_remove_ns", 00:16:11.726 "nvmf_subsystem_set_ns_ana_group", 00:16:11.726 "nvmf_subsystem_add_ns", 00:16:11.726 "nvmf_subsystem_listener_set_ana_state", 00:16:11.726 "nvmf_discovery_get_referrals", 00:16:11.726 "nvmf_discovery_remove_referral", 00:16:11.726 "nvmf_discovery_add_referral", 00:16:11.726 "nvmf_subsystem_remove_listener", 00:16:11.726 "nvmf_subsystem_add_listener", 00:16:11.726 "nvmf_delete_subsystem", 00:16:11.726 "nvmf_create_subsystem", 00:16:11.726 "nvmf_get_subsystems", 00:16:11.726 "env_dpdk_get_mem_stats", 00:16:11.726 "nbd_get_disks", 00:16:11.726 "nbd_stop_disk", 00:16:11.726 "nbd_start_disk", 00:16:11.726 "ublk_recover_disk", 00:16:11.726 "ublk_get_disks", 00:16:11.726 "ublk_stop_disk", 00:16:11.726 "ublk_start_disk", 00:16:11.726 "ublk_destroy_target", 00:16:11.726 "ublk_create_target", 00:16:11.726 "virtio_blk_create_transport", 00:16:11.726 "virtio_blk_get_transports", 00:16:11.726 "vhost_controller_set_coalescing", 00:16:11.726 "vhost_get_controllers", 00:16:11.726 "vhost_delete_controller", 00:16:11.726 "vhost_create_blk_controller", 00:16:11.726 "vhost_scsi_controller_remove_target", 00:16:11.726 "vhost_scsi_controller_add_target", 00:16:11.726 "vhost_start_scsi_controller", 00:16:11.726 "vhost_create_scsi_controller", 00:16:11.726 "thread_set_cpumask", 00:16:11.726 "scheduler_set_options", 00:16:11.726 "framework_get_governor", 00:16:11.726 "framework_get_scheduler", 00:16:11.726 "framework_set_scheduler", 00:16:11.726 "framework_get_reactors", 00:16:11.726 "thread_get_io_channels", 00:16:11.726 "thread_get_pollers", 00:16:11.726 "thread_get_stats", 00:16:11.726 "framework_monitor_context_switch", 00:16:11.726 "spdk_kill_instance", 00:16:11.726 "log_enable_timestamps", 00:16:11.726 "log_get_flags", 00:16:11.726 "log_clear_flag", 00:16:11.726 "log_set_flag", 00:16:11.726 "log_get_level", 00:16:11.726 "log_set_level", 00:16:11.726 "log_get_print_level", 00:16:11.726 "log_set_print_level", 00:16:11.726 "framework_enable_cpumask_locks", 00:16:11.726 "framework_disable_cpumask_locks", 00:16:11.726 "framework_wait_init", 00:16:11.726 "framework_start_init", 00:16:11.726 "scsi_get_devices", 00:16:11.726 "bdev_get_histogram", 00:16:11.726 "bdev_enable_histogram", 00:16:11.726 "bdev_set_qos_limit", 00:16:11.726 "bdev_set_qd_sampling_period", 00:16:11.726 "bdev_get_bdevs", 00:16:11.726 "bdev_reset_iostat", 00:16:11.726 "bdev_get_iostat", 00:16:11.726 "bdev_examine", 00:16:11.726 "bdev_wait_for_examine", 00:16:11.726 "bdev_set_options", 00:16:11.726 "accel_get_stats", 00:16:11.726 "accel_set_options", 00:16:11.726 "accel_set_driver", 00:16:11.726 "accel_crypto_key_destroy", 00:16:11.726 "accel_crypto_keys_get", 00:16:11.726 "accel_crypto_key_create", 00:16:11.726 "accel_assign_opc", 00:16:11.726 "accel_get_module_info", 00:16:11.726 "accel_get_opc_assignments", 00:16:11.726 "vmd_rescan", 00:16:11.726 "vmd_remove_device", 00:16:11.727 "vmd_enable", 00:16:11.727 "sock_get_default_impl", 00:16:11.727 "sock_set_default_impl", 00:16:11.727 "sock_impl_set_options", 00:16:11.727 "sock_impl_get_options", 00:16:11.727 "iobuf_get_stats", 00:16:11.727 "iobuf_set_options", 00:16:11.727 "keyring_get_keys", 00:16:11.727 "framework_get_pci_devices", 00:16:11.727 
"framework_get_config", 00:16:11.727 "framework_get_subsystems", 00:16:11.727 "fsdev_set_opts", 00:16:11.727 "fsdev_get_opts", 00:16:11.727 "trace_get_info", 00:16:11.727 "trace_get_tpoint_group_mask", 00:16:11.727 "trace_disable_tpoint_group", 00:16:11.727 "trace_enable_tpoint_group", 00:16:11.727 "trace_clear_tpoint_mask", 00:16:11.727 "trace_set_tpoint_mask", 00:16:11.727 "notify_get_notifications", 00:16:11.727 "notify_get_types", 00:16:11.727 "spdk_get_version", 00:16:11.727 "rpc_get_methods" 00:16:11.727 ] 00:16:11.727 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:11.727 04:38:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58020 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58020 ']' 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58020 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58020 00:16:11.727 killing process with pid 58020 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58020' 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58020 00:16:11.727 04:38:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58020 00:16:13.180 ************************************ 00:16:13.180 END TEST spdkcli_tcp 00:16:13.180 ************************************ 00:16:13.180 00:16:13.180 real 0m2.935s 00:16:13.180 user 0m5.284s 00:16:13.180 sys 0m0.438s 00:16:13.180 04:38:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.180 04:38:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.180 04:38:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:13.180 04:38:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.180 04:38:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.180 04:38:20 -- common/autotest_common.sh@10 -- # set +x 00:16:13.180 ************************************ 00:16:13.180 START TEST dpdk_mem_utility 00:16:13.180 ************************************ 00:16:13.180 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:13.441 * Looking for test storage... 
00:16:13.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:13.441 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.441 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.441 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.441 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.441 04:38:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.441 04:38:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.441 04:38:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.441 04:38:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.442 04:38:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.442 --rc genhtml_branch_coverage=1 00:16:13.442 --rc genhtml_function_coverage=1 00:16:13.442 --rc genhtml_legend=1 00:16:13.442 --rc geninfo_all_blocks=1 00:16:13.442 --rc geninfo_unexecuted_blocks=1 00:16:13.442 00:16:13.442 ' 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.442 --rc 
genhtml_branch_coverage=1 00:16:13.442 --rc genhtml_function_coverage=1 00:16:13.442 --rc genhtml_legend=1 00:16:13.442 --rc geninfo_all_blocks=1 00:16:13.442 --rc geninfo_unexecuted_blocks=1 00:16:13.442 00:16:13.442 ' 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.442 --rc genhtml_branch_coverage=1 00:16:13.442 --rc genhtml_function_coverage=1 00:16:13.442 --rc genhtml_legend=1 00:16:13.442 --rc geninfo_all_blocks=1 00:16:13.442 --rc geninfo_unexecuted_blocks=1 00:16:13.442 00:16:13.442 ' 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.442 --rc genhtml_branch_coverage=1 00:16:13.442 --rc genhtml_function_coverage=1 00:16:13.442 --rc genhtml_legend=1 00:16:13.442 --rc geninfo_all_blocks=1 00:16:13.442 --rc geninfo_unexecuted_blocks=1 00:16:13.442 00:16:13.442 ' 00:16:13.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.442 04:38:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:13.442 04:38:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58131 00:16:13.442 04:38:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58131 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58131 ']' 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.442 04:38:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.442 04:38:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:13.442 [2024-11-27 04:38:20.535956] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
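The dpdk_mem_utility test below produces the heap/mempool/memzone summary and the per-heap element dump in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt (the path comes back in the RPC reply), and scripts/dpdk_mem_info.py then renders that file, with -m selecting a single heap. The commands as run in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    $rpc env_dpdk_get_mem_stats   # reply: { "filename": "/tmp/spdk_mem_dump.txt" }
    $mem                          # summary: heaps, mempools, memzones
    $mem -m 0                     # detailed busy/free element list for heap id 0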
00:16:13.442 [2024-11-27 04:38:20.536627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:16:13.703 [2024-11-27 04:38:20.692728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.703 [2024-11-27 04:38:20.794172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.275 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.275 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:16:14.275 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:14.275 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:14.275 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.275 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:14.275 { 00:16:14.275 "filename": "/tmp/spdk_mem_dump.txt" 00:16:14.275 } 00:16:14.275 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.275 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:14.275 DPDK memory size 824.000000 MiB in 1 heap(s) 00:16:14.275 1 heaps totaling size 824.000000 MiB 00:16:14.275 size: 824.000000 MiB heap id: 0 00:16:14.275 end heaps---------- 00:16:14.275 9 mempools totaling size 603.782043 MiB 00:16:14.275 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:14.275 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:14.275 size: 100.555481 MiB name: bdev_io_58131 00:16:14.275 size: 50.003479 MiB name: msgpool_58131 00:16:14.275 size: 36.509338 MiB name: fsdev_io_58131 00:16:14.275 size: 21.763794 MiB name: PDU_Pool 00:16:14.275 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:14.275 size: 4.133484 MiB name: evtpool_58131 00:16:14.275 size: 0.026123 MiB name: Session_Pool 00:16:14.275 end mempools------- 00:16:14.275 6 memzones totaling size 4.142822 MiB 00:16:14.275 size: 1.000366 MiB name: RG_ring_0_58131 00:16:14.275 size: 1.000366 MiB name: RG_ring_1_58131 00:16:14.275 size: 1.000366 MiB name: RG_ring_4_58131 00:16:14.275 size: 1.000366 MiB name: RG_ring_5_58131 00:16:14.275 size: 0.125366 MiB name: RG_ring_2_58131 00:16:14.275 size: 0.015991 MiB name: RG_ring_3_58131 00:16:14.275 end memzones------- 00:16:14.275 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:14.538 heap id: 0 total size: 824.000000 MiB number of busy elements: 325 number of free elements: 18 00:16:14.538 list of free elements. 
size: 16.778931 MiB 00:16:14.538 element at address: 0x200006400000 with size: 1.995972 MiB 00:16:14.538 element at address: 0x20000a600000 with size: 1.995972 MiB 00:16:14.538 element at address: 0x200003e00000 with size: 1.991028 MiB 00:16:14.538 element at address: 0x200019500040 with size: 0.999939 MiB 00:16:14.538 element at address: 0x200019900040 with size: 0.999939 MiB 00:16:14.538 element at address: 0x200019a00000 with size: 0.999084 MiB 00:16:14.538 element at address: 0x200032600000 with size: 0.994324 MiB 00:16:14.538 element at address: 0x200000400000 with size: 0.992004 MiB 00:16:14.538 element at address: 0x200019200000 with size: 0.959656 MiB 00:16:14.538 element at address: 0x200019d00040 with size: 0.936401 MiB 00:16:14.538 element at address: 0x200000200000 with size: 0.716980 MiB 00:16:14.538 element at address: 0x20001b400000 with size: 0.559509 MiB 00:16:14.538 element at address: 0x200000c00000 with size: 0.489197 MiB 00:16:14.538 element at address: 0x200019600000 with size: 0.487976 MiB 00:16:14.538 element at address: 0x200019e00000 with size: 0.485413 MiB 00:16:14.538 element at address: 0x200012c00000 with size: 0.433228 MiB 00:16:14.538 element at address: 0x200028800000 with size: 0.391418 MiB 00:16:14.538 element at address: 0x200000800000 with size: 0.350891 MiB 00:16:14.538 list of standard malloc elements. size: 199.290161 MiB 00:16:14.538 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:16:14.538 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:16:14.538 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:16:14.538 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:16:14.538 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:16:14.538 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:16:14.538 element at address: 0x200019deff40 with size: 0.062683 MiB 00:16:14.538 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:16:14.538 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:16:14.538 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:16:14.538 element at address: 0x200012bff040 with size: 0.000305 MiB 00:16:14.538 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:16:14.538 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:16:14.538 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:16:14.539 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200000cff000 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff180 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff280 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff380 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff480 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff580 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff680 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff780 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff880 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bff980 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200019affc40 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b490fc0 with size: 0.000244 MiB 
00:16:14.539 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:16:14.539 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:16:14.540 element at 
address: 0x20001b4941c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:16:14.540 element at address: 0x200028864340 with size: 0.000244 MiB 00:16:14.540 element at address: 0x200028864440 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b100 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b380 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b480 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b580 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b680 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b780 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b880 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886b980 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886be80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c080 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c180 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c280 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c380 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c480 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c580 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c680 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c780 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c880 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886c980 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ce80 
with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d080 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d180 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d280 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d380 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d480 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d580 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d680 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d780 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d880 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886d980 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886da80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886db80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886de80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886df80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e080 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e180 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e280 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e380 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e480 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e580 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e680 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e780 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e880 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886e980 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f080 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f180 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f280 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f380 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f480 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f580 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f680 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f780 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f880 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886f980 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:16:14.540 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:16:14.540 list of memzone associated elements. 
size: 607.930908 MiB 00:16:14.540 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:16:14.540 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:14.540 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:16:14.540 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:14.540 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:16:14.540 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58131_0 00:16:14.540 element at address: 0x200000dff340 with size: 48.003113 MiB 00:16:14.540 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58131_0 00:16:14.540 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:16:14.540 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58131_0 00:16:14.540 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:16:14.540 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:14.540 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:16:14.540 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:14.541 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:16:14.541 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58131_0 00:16:14.541 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:16:14.541 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58131 00:16:14.541 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:16:14.541 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58131 00:16:14.541 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:16:14.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:14.541 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:16:14.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:14.541 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:16:14.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:14.541 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:16:14.541 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:14.541 element at address: 0x200000cff100 with size: 1.000549 MiB 00:16:14.541 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58131 00:16:14.541 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:16:14.541 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58131 00:16:14.541 element at address: 0x200019affd40 with size: 1.000549 MiB 00:16:14.541 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58131 00:16:14.541 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:16:14.541 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58131 00:16:14.541 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:16:14.541 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58131 00:16:14.541 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:16:14.541 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58131 00:16:14.541 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:16:14.541 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:14.541 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:16:14.541 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:14.541 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:16:14.541 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:16:14.541 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:16:14.541 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58131 00:16:14.541 element at address: 0x20000085df80 with size: 0.125549 MiB 00:16:14.541 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58131 00:16:14.541 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:16:14.541 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:14.541 element at address: 0x200028864540 with size: 0.023804 MiB 00:16:14.541 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:14.541 element at address: 0x200000859d40 with size: 0.016174 MiB 00:16:14.541 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58131 00:16:14.541 element at address: 0x20002886a6c0 with size: 0.002502 MiB 00:16:14.541 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:14.541 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:16:14.541 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58131 00:16:14.541 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:16:14.541 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58131 00:16:14.541 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:16:14.541 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58131 00:16:14.541 element at address: 0x20002886b200 with size: 0.000366 MiB 00:16:14.541 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:14.541 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:14.541 04:38:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58131 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58131 ']' 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58131 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58131 00:16:14.541 killing process with pid 58131 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58131' 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58131 00:16:14.541 04:38:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58131 00:16:15.938 ************************************ 00:16:15.938 END TEST dpdk_mem_utility 00:16:15.938 ************************************ 00:16:15.938 00:16:15.938 real 0m2.718s 00:16:15.938 user 0m2.725s 00:16:15.938 sys 0m0.390s 00:16:15.938 04:38:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.938 04:38:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 04:38:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:15.938 04:38:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.938 04:38:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.938 04:38:23 -- common/autotest_common.sh@10 -- # set +x 
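The element and memzone listing above is the DPDK memory report that test_dpdk_mem_info.sh captures from the running target (pid 58131) before killing it. A minimal sketch of pulling the same report from any running SPDK app over its RPC socket — the RPC name env_dpdk_get_mem_stats is an assumption based on this test's helpers and should be verified against the SPDK version in use:

    # assumes a running SPDK target listening on the default /var/tmp/spdk.sock
    scripts/rpc.py env_dpdk_get_mem_stats   # RPC name assumed; writes the dump to a file and prints its path
    # the dump groups "element at address ... with size ..." records per heap,
    # then lists memzones with their associated pool/ring names (MP_* / RG_*)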
00:16:15.938 ************************************ 00:16:15.938 START TEST event 00:16:15.938 ************************************ 00:16:15.938 04:38:23 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:16.198 * Looking for test storage... 00:16:16.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.198 04:38:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.198 04:38:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.198 04:38:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.198 04:38:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.198 04:38:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.198 04:38:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.198 04:38:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.198 04:38:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.198 04:38:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.198 04:38:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.198 04:38:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.198 04:38:23 event -- scripts/common.sh@344 -- # case "$op" in 00:16:16.198 04:38:23 event -- scripts/common.sh@345 -- # : 1 00:16:16.198 04:38:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.198 04:38:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.198 04:38:23 event -- scripts/common.sh@365 -- # decimal 1 00:16:16.198 04:38:23 event -- scripts/common.sh@353 -- # local d=1 00:16:16.198 04:38:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.198 04:38:23 event -- scripts/common.sh@355 -- # echo 1 00:16:16.198 04:38:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.198 04:38:23 event -- scripts/common.sh@366 -- # decimal 2 00:16:16.198 04:38:23 event -- scripts/common.sh@353 -- # local d=2 00:16:16.198 04:38:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.198 04:38:23 event -- scripts/common.sh@355 -- # echo 2 00:16:16.198 04:38:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.198 04:38:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.198 04:38:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.198 04:38:23 event -- scripts/common.sh@368 -- # return 0 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.198 --rc genhtml_branch_coverage=1 00:16:16.198 --rc genhtml_function_coverage=1 00:16:16.198 --rc genhtml_legend=1 00:16:16.198 --rc geninfo_all_blocks=1 00:16:16.198 --rc geninfo_unexecuted_blocks=1 00:16:16.198 00:16:16.198 ' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.198 --rc genhtml_branch_coverage=1 00:16:16.198 --rc genhtml_function_coverage=1 00:16:16.198 --rc genhtml_legend=1 00:16:16.198 --rc 
geninfo_all_blocks=1 00:16:16.198 --rc geninfo_unexecuted_blocks=1 00:16:16.198 00:16:16.198 ' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.198 --rc genhtml_branch_coverage=1 00:16:16.198 --rc genhtml_function_coverage=1 00:16:16.198 --rc genhtml_legend=1 00:16:16.198 --rc geninfo_all_blocks=1 00:16:16.198 --rc geninfo_unexecuted_blocks=1 00:16:16.198 00:16:16.198 ' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.198 --rc genhtml_branch_coverage=1 00:16:16.198 --rc genhtml_function_coverage=1 00:16:16.198 --rc genhtml_legend=1 00:16:16.198 --rc geninfo_all_blocks=1 00:16:16.198 --rc geninfo_unexecuted_blocks=1 00:16:16.198 00:16:16.198 ' 00:16:16.198 04:38:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:16.198 04:38:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:16:16.198 04:38:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:16:16.198 04:38:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.198 04:38:23 event -- common/autotest_common.sh@10 -- # set +x 00:16:16.198 ************************************ 00:16:16.198 START TEST event_perf 00:16:16.198 ************************************ 00:16:16.198 04:38:23 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:16.198 Running I/O for 1 seconds...[2024-11-27 04:38:23.292317] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:16.198 [2024-11-27 04:38:23.292431] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58223 ] 00:16:16.457 [2024-11-27 04:38:23.453342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.457 [2024-11-27 04:38:23.560584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.457 [2024-11-27 04:38:23.560969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.457 [2024-11-27 04:38:23.561307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.457 Running I/O for 1 seconds...[2024-11-27 04:38:23.561334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.839 00:16:17.839 lcore 0: 192533 00:16:17.839 lcore 1: 192534 00:16:17.839 lcore 2: 192532 00:16:17.839 lcore 3: 192530 00:16:17.839 done. 
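Each "lcore N: <count>" line above is the number of events that reactor drained during the one-second measurement window. A minimal re-run of the same benchmark, using the flags from the trace (-m 0xF pins four reactors to cores 0-3, -t 1 sets the runtime in seconds; paths as in this job's workspace):

    cd /home/vagrant/spdk_repo/spdk
    # prints "Running I/O for 1 seconds...", one event count per reactor, then "done."
    test/event/event_perf/event_perf -m 0xF -t 1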
00:16:17.839 00:16:17.839 real 0m1.469s 00:16:17.839 user 0m4.257s 00:16:17.839 sys 0m0.080s 00:16:17.839 04:38:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.839 04:38:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.839 ************************************ 00:16:17.839 END TEST event_perf 00:16:17.839 ************************************ 00:16:17.839 04:38:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:17.839 04:38:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:17.839 04:38:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.839 04:38:24 event -- common/autotest_common.sh@10 -- # set +x 00:16:17.839 ************************************ 00:16:17.839 START TEST event_reactor 00:16:17.839 ************************************ 00:16:17.839 04:38:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:17.839 [2024-11-27 04:38:24.826410] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:17.839 [2024-11-27 04:38:24.826530] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:16:17.839 [2024-11-27 04:38:24.985933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.101 [2024-11-27 04:38:25.095303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.042 test_start 00:16:19.042 oneshot 00:16:19.042 tick 100 00:16:19.042 tick 100 00:16:19.042 tick 250 00:16:19.042 tick 100 00:16:19.042 tick 100 00:16:19.042 tick 100 00:16:19.043 tick 250 00:16:19.043 tick 500 00:16:19.043 tick 100 00:16:19.043 tick 100 00:16:19.043 tick 250 00:16:19.043 tick 100 00:16:19.043 tick 100 00:16:19.043 test_end 00:16:19.303 00:16:19.303 real 0m1.456s 00:16:19.303 user 0m1.275s 00:16:19.303 sys 0m0.071s 00:16:19.303 04:38:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.303 04:38:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:19.303 ************************************ 00:16:19.303 END TEST event_reactor 00:16:19.303 ************************************ 00:16:19.303 04:38:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:19.303 04:38:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:19.303 04:38:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.303 04:38:26 event -- common/autotest_common.sh@10 -- # set +x 00:16:19.303 ************************************ 00:16:19.303 START TEST event_reactor_perf 00:16:19.303 ************************************ 00:16:19.303 04:38:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:19.303 [2024-11-27 04:38:26.354302] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:16:19.303 [2024-11-27 04:38:26.354416] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:16:19.563 [2024-11-27 04:38:26.515614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.563 [2024-11-27 04:38:26.617619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.948 test_start 00:16:20.948 test_end 00:16:20.948 Performance: 313877 events per second 00:16:20.948 00:16:20.948 real 0m1.450s 00:16:20.948 user 0m1.274s 00:16:20.948 sys 0m0.067s 00:16:20.948 ************************************ 00:16:20.948 END TEST event_reactor_perf 00:16:20.948 ************************************ 00:16:20.948 04:38:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.948 04:38:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 04:38:27 event -- event/event.sh@49 -- # uname -s 00:16:20.948 04:38:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:20.948 04:38:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:20.948 04:38:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.948 04:38:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.948 04:38:27 event -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 ************************************ 00:16:20.948 START TEST event_scheduler 00:16:20.948 ************************************ 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:20.948 * Looking for test storage... 
00:16:20.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.948 04:38:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.948 --rc genhtml_branch_coverage=1 00:16:20.948 --rc genhtml_function_coverage=1 00:16:20.948 --rc genhtml_legend=1 00:16:20.948 --rc geninfo_all_blocks=1 00:16:20.948 --rc geninfo_unexecuted_blocks=1 00:16:20.948 00:16:20.948 ' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.948 --rc genhtml_branch_coverage=1 00:16:20.948 --rc genhtml_function_coverage=1 00:16:20.948 --rc genhtml_legend=1 00:16:20.948 --rc geninfo_all_blocks=1 00:16:20.948 --rc geninfo_unexecuted_blocks=1 00:16:20.948 00:16:20.948 ' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.948 --rc genhtml_branch_coverage=1 00:16:20.948 --rc genhtml_function_coverage=1 00:16:20.948 --rc genhtml_legend=1 00:16:20.948 --rc geninfo_all_blocks=1 00:16:20.948 --rc geninfo_unexecuted_blocks=1 00:16:20.948 00:16:20.948 ' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.948 --rc genhtml_branch_coverage=1 00:16:20.948 --rc genhtml_function_coverage=1 00:16:20.948 --rc genhtml_legend=1 00:16:20.948 --rc geninfo_all_blocks=1 00:16:20.948 --rc geninfo_unexecuted_blocks=1 00:16:20.948 00:16:20.948 ' 00:16:20.948 04:38:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:20.948 04:38:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58375 00:16:20.948 04:38:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:20.948 04:38:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58375 00:16:20.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
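The scheduler app (pid 58375 above) is launched with --wait-for-rpc, so the test can select and tune the scheduler before the reactors begin polling. A condensed sketch of the RPC sequence the trace below performs — note that scheduler_plugin and scheduler_thread_create come from this test's local plugin, not the stock rpc.py command set:

    # select the dynamic scheduler, then let framework init finish:
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init
    # the plugin then creates test threads pinned by cpumask with a target active percentage:
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100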
00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58375 ']' 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.948 04:38:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.948 04:38:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 [2024-11-27 04:38:28.053613] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:20.948 [2024-11-27 04:38:28.053739] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ] 00:16:21.210 [2024-11-27 04:38:28.213461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.210 [2024-11-27 04:38:28.325358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.210 [2024-11-27 04:38:28.325703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.210 [2024-11-27 04:38:28.326278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.210 [2024-11-27 04:38:28.326391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.783 04:38:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.783 04:38:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:16:21.783 04:38:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:21.783 04:38:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.783 04:38:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:21.783 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:21.784 POWER: Cannot set governor of lcore 0 to userspace 00:16:21.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:21.784 POWER: Cannot set governor of lcore 0 to performance 00:16:21.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:21.784 POWER: Cannot set governor of lcore 0 to userspace 00:16:21.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:21.784 POWER: Cannot set governor of lcore 0 to userspace 00:16:21.784 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:16:21.784 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:21.784 POWER: Unable to set Power Management Environment for lcore 0 00:16:21.784 [2024-11-27 04:38:28.895954] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:16:21.784 [2024-11-27 04:38:28.895978] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:16:21.784 [2024-11-27 04:38:28.896000] scheduler_dynamic.c: 280:init: 
*NOTICE*: Unable to initialize dpdk governor 00:16:21.784 [2024-11-27 04:38:28.896018] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:16:21.784 [2024-11-27 04:38:28.896027] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:16:21.784 [2024-11-27 04:38:28.896037] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:16:21.784 04:38:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.784 04:38:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:21.784 04:38:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.784 04:38:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 [2024-11-27 04:38:29.128611] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:16:22.045 04:38:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:22.045 04:38:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:22.045 04:38:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 ************************************ 00:16:22.045 START TEST scheduler_create_thread 00:16:22.045 ************************************ 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 2 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 3 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 4 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
active_pinned -m 0x8 -a 100 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 5 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 6 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 7 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 8 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 9 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.045 10 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.045 04:38:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:22.306 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.307 04:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 04:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.688 04:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:23.688 04:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:23.688 04:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.688 04:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:24.630 ************************************ 00:16:24.630 END TEST scheduler_create_thread 00:16:24.630 ************************************ 00:16:24.630 04:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.630 00:16:24.630 real 0m2.617s 00:16:24.630 user 0m0.016s 00:16:24.630 sys 0m0.007s 00:16:24.630 04:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.630 04:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:24.630 04:38:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:24.630 04:38:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58375 00:16:24.630 04:38:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58375 ']' 00:16:24.630 04:38:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58375 00:16:24.630 04:38:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:16:24.630 04:38:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.630 04:38:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58375 00:16:24.892 04:38:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:24.892 04:38:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:24.892 killing process with pid 58375 00:16:24.892 04:38:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58375' 00:16:24.892 04:38:31 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 58375 00:16:24.892 04:38:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58375 00:16:25.166 [2024-11-27 04:38:32.244376] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:16:26.114 00:16:26.114 real 0m5.168s 00:16:26.114 user 0m9.038s 00:16:26.114 sys 0m0.333s 00:16:26.114 04:38:33 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.114 ************************************ 00:16:26.114 END TEST event_scheduler 00:16:26.114 ************************************ 00:16:26.114 04:38:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:26.114 04:38:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:16:26.114 04:38:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:26.114 04:38:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.114 04:38:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.114 04:38:33 event -- common/autotest_common.sh@10 -- # set +x 00:16:26.114 ************************************ 00:16:26.114 START TEST app_repeat 00:16:26.114 ************************************ 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58481 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:26.114 Process app_repeat pid: 58481 00:16:26.114 spdk_app_start Round 0 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58481' 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58481 /var/tmp/spdk-nbd.sock 00:16:26.114 04:38:33 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58481 ']' 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:26.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
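Per the launch line above, app_repeat runs with -m 0x3 (two reactors, cores 0 and 1) against its own /var/tmp/spdk-nbd.sock socket, cycling each round through bdev creation, NBD attach, and teardown. The setup the trace below performs, condensed — argument order follows bdev_malloc_create's <total_size_MB> <block_size> convention:

    # two 64 MB malloc bdevs with a 4096-byte block size, each exported over NBD:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0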
00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.114 04:38:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:26.114 [2024-11-27 04:38:33.112949] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:16:26.114 [2024-11-27 04:38:33.113082] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58481 ] 00:16:26.114 [2024-11-27 04:38:33.268386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.375 [2024-11-27 04:38:33.373687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.375 [2024-11-27 04:38:33.373842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.947 04:38:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.947 04:38:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:26.947 04:38:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:27.208 Malloc0 00:16:27.209 04:38:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:27.469 Malloc1 00:16:27.469 04:38:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.469 04:38:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:27.731 /dev/nbd0 00:16:27.731 04:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.731 04:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:27.731 04:38:34 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:27.731 1+0 records in 00:16:27.731 1+0 records out 00:16:27.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292168 s, 14.0 MB/s 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.731 04:38:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:27.731 04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.731 04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.731 04:38:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:27.991 /dev/nbd1 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:27.991 1+0 records in 00:16:27.991 1+0 records out 00:16:27.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534806 s, 7.7 MB/s 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.991 04:38:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.991 
04:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.991 04:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.251 04:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:28.251 { 00:16:28.251 "nbd_device": "/dev/nbd0", 00:16:28.251 "bdev_name": "Malloc0" 00:16:28.251 }, 00:16:28.251 { 00:16:28.251 "nbd_device": "/dev/nbd1", 00:16:28.251 "bdev_name": "Malloc1" 00:16:28.251 } 00:16:28.251 ]' 00:16:28.251 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:28.251 { 00:16:28.251 "nbd_device": "/dev/nbd0", 00:16:28.251 "bdev_name": "Malloc0" 00:16:28.251 }, 00:16:28.251 { 00:16:28.251 "nbd_device": "/dev/nbd1", 00:16:28.251 "bdev_name": "Malloc1" 00:16:28.251 } 00:16:28.252 ]' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:28.252 /dev/nbd1' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:28.252 /dev/nbd1' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:28.252 256+0 records in 00:16:28.252 256+0 records out 00:16:28.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00765771 s, 137 MB/s 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:28.252 256+0 records in 00:16:28.252 256+0 records out 00:16:28.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221259 s, 47.4 MB/s 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:28.252 256+0 records in 00:16:28.252 256+0 records out 00:16:28.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212169 s, 49.4 MB/s 00:16:28.252 04:38:35 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.252 04:38:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.512 04:38:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:28.772 04:38:35 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:28.772 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:29.032 04:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:29.032 04:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:29.032 04:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:29.032 04:38:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:29.032 04:38:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:29.292 04:38:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:29.865 [2024-11-27 04:38:37.056812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.124 [2024-11-27 04:38:37.158655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.124 [2024-11-27 04:38:37.158790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.124 [2024-11-27 04:38:37.289039] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:30.124 [2024-11-27 04:38:37.289123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:32.689 04:38:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:32.689 spdk_app_start Round 1 00:16:32.689 04:38:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:32.689 04:38:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58481 /var/tmp/spdk-nbd.sock 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58481 ']' 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:32.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
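Each app_repeat round above ends with the same nbd data-verify pass. Stripped of the helper plumbing, the round trip is just dd plus cmp; a minimal sketch, assuming Malloc0 and Malloc1 are already exported as /dev/nbd0 and /dev/nbd1 as in the trace (the temp-file path is illustrative):

  tmp=/tmp/nbdrandtest
  # seed 1 MiB of random data (256 x 4 KiB, matching the counts in the log)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256

  for nbd in /dev/nbd0 /dev/nbd1; do
      # push the pattern through each exported device with direct I/O
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done

  for nbd in /dev/nbd0 /dev/nbd1; do
      # read it back and byte-compare the first 1M; cmp exits non-zero on a mismatch
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"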
00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.689 04:38:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:32.689 04:38:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:32.689 Malloc0 00:16:32.689 04:38:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:32.950 Malloc1 00:16:32.950 04:38:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.950 04:38:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:33.211 /dev/nbd0 00:16:33.211 04:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.211 04:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:33.211 1+0 records in 00:16:33.211 1+0 records out 
00:16:33.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202177 s, 20.3 MB/s 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.211 04:38:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:33.211 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.211 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.211 04:38:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:33.472 /dev/nbd1 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:33.472 1+0 records in 00:16:33.472 1+0 records out 00:16:33.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347673 s, 11.8 MB/s 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.472 04:38:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.472 04:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:33.733 { 00:16:33.733 "nbd_device": "/dev/nbd0", 00:16:33.733 "bdev_name": "Malloc0" 00:16:33.733 }, 00:16:33.733 { 00:16:33.733 "nbd_device": "/dev/nbd1", 00:16:33.733 "bdev_name": "Malloc1" 00:16:33.733 } 
00:16:33.733 ]' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:33.733 { 00:16:33.733 "nbd_device": "/dev/nbd0", 00:16:33.733 "bdev_name": "Malloc0" 00:16:33.733 }, 00:16:33.733 { 00:16:33.733 "nbd_device": "/dev/nbd1", 00:16:33.733 "bdev_name": "Malloc1" 00:16:33.733 } 00:16:33.733 ]' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:33.733 /dev/nbd1' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:33.733 /dev/nbd1' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:33.733 256+0 records in 00:16:33.733 256+0 records out 00:16:33.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757241 s, 138 MB/s 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:33.733 04:38:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:33.995 256+0 records in 00:16:33.995 256+0 records out 00:16:33.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163817 s, 64.0 MB/s 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:33.995 256+0 records in 00:16:33.995 256+0 records out 00:16:33.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191193 s, 54.8 MB/s 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:33.995 04:38:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.995 04:38:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:33.995 04:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.256 04:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:34.516 04:38:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:34.516 04:38:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:35.088 04:38:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:35.661 [2024-11-27 04:38:42.736793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.661 [2024-11-27 04:38:42.839196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.661 [2024-11-27 04:38:42.839318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.922 [2024-11-27 04:38:42.962619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:35.922 [2024-11-27 04:38:42.962704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:37.841 spdk_app_start Round 2 00:16:37.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:37.841 04:38:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:37.841 04:38:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:37.841 04:38:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58481 /var/tmp/spdk-nbd.sock 00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58481 ']' 00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
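Between rounds the trace also re-checks that every nbd device was torn down: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq flattens it to device paths, and grep -c counts them, with a bare true absorbing grep's non-zero exit on an empty list, exactly as in the xtrace above. A minimal standalone version, assuming the app is still serving RPCs on /var/tmp/spdk-nbd.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # '[]' after nbd_stop_disk; two entries while the disks are attached
  disks_json=$($rpc -s "$sock" nbd_get_disks)

  # one /dev/nbdX path per line, empty when nothing is attached
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')

  # grep -c prints 0 but exits 1 on no match, hence the || true
  count=$(echo "$names" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || echo "still attached: $names"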
00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.841 04:38:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:38.147 04:38:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.147 04:38:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:38.147 04:38:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:38.407 Malloc0 00:16:38.407 04:38:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:38.667 Malloc1 00:16:38.667 04:38:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.667 04:38:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:38.668 04:38:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:38.668 04:38:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:38.668 04:38:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.668 04:38:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:38.929 /dev/nbd0 00:16:38.929 04:38:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:38.929 04:38:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:38.929 1+0 records in 00:16:38.929 1+0 records out 
00:16:38.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391568 s, 10.5 MB/s 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:38.929 04:38:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:38.929 04:38:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.929 04:38:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.929 04:38:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:39.190 /dev/nbd1 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:39.190 1+0 records in 00:16:39.190 1+0 records out 00:16:39.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291453 s, 14.1 MB/s 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.190 04:38:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:39.190 04:38:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:39.190 { 00:16:39.190 "nbd_device": "/dev/nbd0", 00:16:39.190 "bdev_name": "Malloc0" 00:16:39.190 }, 00:16:39.190 { 00:16:39.190 "nbd_device": "/dev/nbd1", 00:16:39.190 "bdev_name": "Malloc1" 00:16:39.190 } 
00:16:39.190 ]' 00:16:39.452 04:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:39.453 { 00:16:39.453 "nbd_device": "/dev/nbd0", 00:16:39.453 "bdev_name": "Malloc0" 00:16:39.453 }, 00:16:39.453 { 00:16:39.453 "nbd_device": "/dev/nbd1", 00:16:39.453 "bdev_name": "Malloc1" 00:16:39.453 } 00:16:39.453 ]' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:39.453 /dev/nbd1' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:39.453 /dev/nbd1' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:39.453 256+0 records in 00:16:39.453 256+0 records out 00:16:39.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005922 s, 177 MB/s 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:39.453 256+0 records in 00:16:39.453 256+0 records out 00:16:39.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149226 s, 70.3 MB/s 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:39.453 256+0 records in 00:16:39.453 256+0 records out 00:16:39.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211646 s, 49.5 MB/s 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.453 04:38:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.714 04:38:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.975 04:38:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:39.975 04:38:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:39.975 04:38:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:39.975 04:38:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:40.237 04:38:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:40.237 04:38:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:40.498 04:38:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:41.069 [2024-11-27 04:38:48.205033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.327 [2024-11-27 04:38:48.288784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.327 [2024-11-27 04:38:48.288978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.327 [2024-11-27 04:38:48.390758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:41.327 [2024-11-27 04:38:48.390825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:43.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:43.854 04:38:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58481 /var/tmp/spdk-nbd.sock 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58481 ']' 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
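The waitfornbd and waitfornbd_exit helpers that bracket every round are bounded polling loops over /proc/partitions, capped at 20 attempts by the (( i <= 20 )) guards visible in the trace. A minimal sketch of both: the sleep interval is an assumption (the real helpers' delay is not visible here), and the readability probe is simplified to discard its output rather than stat and remove a scratch file as the trace does.

  waitfornbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # break as soon as the kernel has registered the device
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1   # assumed poll interval
      done
      # prove one 4 KiB block is readable with direct I/O, like the trace's dd probe
      dd if="/dev/$name" of=/dev/null bs=4096 count=1 iflag=direct
  }

  waitfornbd_exit() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # break once the device has disappeared from the partition table
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1   # assumed poll interval
      done
  }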
00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:43.854 04:38:50 event.app_repeat -- event/event.sh@39 -- # killprocess 58481 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58481 ']' 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58481 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58481 00:16:43.854 killing process with pid 58481 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58481' 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58481 00:16:43.854 04:38:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58481 00:16:44.113 spdk_app_start is called in Round 0. 00:16:44.113 Shutdown signal received, stop current app iteration 00:16:44.113 Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 reinitialization... 00:16:44.113 spdk_app_start is called in Round 1. 00:16:44.113 Shutdown signal received, stop current app iteration 00:16:44.113 Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 reinitialization... 00:16:44.113 spdk_app_start is called in Round 2. 00:16:44.113 Shutdown signal received, stop current app iteration 00:16:44.113 Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 reinitialization... 00:16:44.113 spdk_app_start is called in Round 3. 00:16:44.113 Shutdown signal received, stop current app iteration 00:16:44.113 ************************************ 00:16:44.113 END TEST app_repeat 00:16:44.113 ************************************ 00:16:44.113 04:38:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:44.113 04:38:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:44.113 00:16:44.113 real 0m18.227s 00:16:44.113 user 0m39.875s 00:16:44.113 sys 0m2.251s 00:16:44.113 04:38:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.113 04:38:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 04:38:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:44.445 04:38:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:44.445 04:38:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.445 04:38:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.445 04:38:51 event -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 ************************************ 00:16:44.445 START TEST cpu_locks 00:16:44.445 ************************************ 00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:44.445 * Looking for test storage... 
00:16:44.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:44.445 04:38:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:16:44.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.445 --rc genhtml_branch_coverage=1
00:16:44.445 --rc genhtml_function_coverage=1
00:16:44.445 --rc genhtml_legend=1
00:16:44.445 --rc geninfo_all_blocks=1
00:16:44.445 --rc geninfo_unexecuted_blocks=1
00:16:44.445
00:16:44.445 '
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:16:44.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.445 --rc genhtml_branch_coverage=1
00:16:44.445 --rc genhtml_function_coverage=1
00:16:44.445 --rc genhtml_legend=1
00:16:44.445 --rc geninfo_all_blocks=1
00:16:44.445 --rc geninfo_unexecuted_blocks=1
00:16:44.445
00:16:44.445 '
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:16:44.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.445 --rc genhtml_branch_coverage=1
00:16:44.445 --rc genhtml_function_coverage=1
00:16:44.445 --rc genhtml_legend=1
00:16:44.445 --rc geninfo_all_blocks=1
00:16:44.445 --rc geninfo_unexecuted_blocks=1
00:16:44.445
00:16:44.445 '
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:16:44.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.445 --rc genhtml_branch_coverage=1
00:16:44.445 --rc genhtml_function_coverage=1
00:16:44.445 --rc genhtml_legend=1
00:16:44.445 --rc geninfo_all_blocks=1
00:16:44.445 --rc geninfo_unexecuted_blocks=1
00:16:44.445
00:16:44.445 '
00:16:44.445 04:38:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:16:44.445 04:38:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:16:44.445 04:38:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:16:44.445 04:38:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:44.445 04:38:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:16:44.445 ************************************
00:16:44.445 START TEST default_locks
00:16:44.445 ************************************
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58912
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58912
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58912 ']'
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:16:44.445 04:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:16:44.445 [2024-11-27 04:38:51.606089] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
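The `lt 1.15 2` / `cmp_versions` trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on `IFS=.-:` into numeric components, the components are compared left to right, and the branch-coverage `--rc` options are exported only for the older tool. A minimal standalone sketch of that comparison (simplified; the real helper also handles `>`, `=` and the ge/le variants):

  # version_lt A B: succeed when A sorts strictly before B, comparing
  # dot/dash-separated numeric components left to right (missing parts = 0).
  version_lt() {
      local IFS=.-
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov older than 2: enable branch-coverage flags'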
00:16:44.445 [2024-11-27 04:38:51.606214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ]
00:16:44.707 [2024-11-27 04:38:51.763688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:44.707 [2024-11-27 04:38:51.866941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:45.278 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:45.278 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:16:45.278 04:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58912
00:16:45.279 04:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58912
00:16:45.279 04:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58912
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58912 ']'
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58912
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58912
00:16:45.847 killing process with pid 58912
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58912'
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58912
00:16:45.847 04:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58912
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58912
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58912
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58912
00:16:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
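The `locks_exist 58912` step above is the test's core assertion: a target started with `-m 0x1` must hold its per-core lock file (the `/var/tmp/spdk_cpu_lock_*` files that appear later in this log), and `lslocks` filtered by pid must show it. A sketch of the helper as traced, assuming the same lock-file naming:

  # Succeed only if the given pid holds at least one SPDK per-core lock.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 58912 && echo 'pid 58912 holds its CPU core lock'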
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58912 ']'
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:16:47.230 ERROR: process (pid: 58912) is no longer running
00:16:47.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58912) - No such process
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:16:47.230
00:16:47.230 real 0m2.789s
00:16:47.230 user 0m2.754s
00:16:47.230 sys 0m0.473s
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:47.230 04:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:16:47.230 ************************************
00:16:47.230 END TEST default_locks
00:16:47.230 ************************************
00:16:47.230 04:38:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:16:47.230 04:38:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:47.230 04:38:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:47.230 04:38:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:16:47.230 ************************************
00:16:47.230 START TEST default_locks_via_rpc
00:16:47.230 ************************************
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58970
00:16:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
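The `NOT waitforlisten 58912` sequence that just completed (note the `es=1` and `(( es > 128 ))` steps) is the harness's negative-test wrapper: the step passes precisely because `waitforlisten` fails against the killed pid. The core idea in miniature (the real helper also validates its argument and treats exit codes above 128, i.e. crashes, differently from plain failures):

  # NOT cmd...: invert the exit status; succeed only when cmd fails.
  NOT() {
      if "$@"; then return 1; fi
      return 0
  }
  NOT kill -0 58912 && echo 'pid 58912 is gone, as the test expects'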
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58970
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58970 ']'
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:16:47.230 04:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.488 [2024-11-27 04:38:54.457187] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:16:47.488 [2024-11-27 04:38:54.457772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ]
00:16:47.488 [2024-11-27 04:38:54.615772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:47.748 [2024-11-27 04:38:54.715417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58970
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58970
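default_locks_via_rpc drives the same lock lifecycle over JSON-RPC instead of process start/stop: `rpc_cmd framework_disable_cpumask_locks` releases the per-core lock files on the live target, `framework_enable_cpumask_locks` re-acquires them, and `locks_exist 58970` must then succeed again. Since `rpc_cmd` is a thin wrapper over SPDK's scripts/rpc.py, the exchange can be replayed by hand roughly as:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p 58970 | grep spdk_cpu_lock   # the locks are held again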
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58970
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58970 ']'
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58970
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:48.316 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58970
00:16:48.576 killing process with pid 58970
00:16:48.576 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:48.576 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:48.576 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58970'
00:16:48.576 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58970
00:16:48.576 04:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58970
00:16:49.953
00:16:49.953 real 0m2.677s
00:16:49.953 user 0m2.694s
00:16:49.953 sys 0m0.436s
00:16:49.953 04:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:49.953 ************************************
00:16:49.953 END TEST default_locks_via_rpc
00:16:49.953 ************************************
00:16:49.953 04:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.953 04:38:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:16:49.953 04:38:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:49.953 04:38:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:49.953 04:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:16:49.953 ************************************
00:16:49.953 START TEST non_locking_app_on_locked_coremask
00:16:49.953 ************************************
00:16:49.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59033
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59033 /var/tmp/spdk.sock
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59033 ']'
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
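Every killprocess block in this log follows the shape traced above: probe the pid with `kill -0`, read its command name with `ps --no-headers -o comm=` (refusing to kill `sudo`), then terminate and reap it so the next test starts clean. A reduced sketch of that helper:

  # killprocess pid: verify it exists, name-check it, terminate, reap.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                           # still alive?
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }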
00:16:49.954 04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:16:49.954 04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:49.954 04:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:16:50.214 [2024-11-27 04:38:57.192736] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:16:50.214 [2024-11-27 04:38:57.193020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ]
00:16:50.214 [2024-11-27 04:38:57.345867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:50.477 [2024-11-27 04:38:57.448615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:51.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59049
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59049 /var/tmp/spdk2.sock
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59049 ']'
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
04:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:16:51.316 [2024-11-27 04:38:58.122042] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:16:51.316 [2024-11-27 04:38:58.122184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ]
00:16:51.316 [2024-11-27 04:38:58.295407] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
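This is the point of non_locking_app_on_locked_coremask: pid 59033 holds the lock for core 0, yet pid 59049 has just started on the same `-m 0x1` mask because `--disable-cpumask-locks` skips the claim entirely, which is what the 'CPU core locks deactivated' notice above records. In sketch form, with the binary and socket paths used by this log:

  build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares it without claiming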
00:16:51.316 [2024-11-27 04:38:58.295458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:51.316 [2024-11-27 04:38:58.500246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:52.696 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:52.696 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:16:52.696 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59033
00:16:52.696 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59033
00:16:52.696 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59033
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59033 ']'
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59033
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59033
00:16:52.957 killing process with pid 59033
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59033'
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59033
00:16:52.957 04:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59033
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59049
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59049 ']'
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59049
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59049
00:16:56.303 killing process with pid 59049
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59049'
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59049
00:16:56.303 04:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59049
00:16:57.238
00:16:57.238 real 0m7.063s
00:16:57.238 user 0m7.326s
00:16:57.238 sys 0m0.821s
00:16:57.238 04:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:57.238 04:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:16:57.238 ************************************
00:16:57.238 END TEST non_locking_app_on_locked_coremask
00:16:57.238 ************************************
00:16:57.239 04:39:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:16:57.239 04:39:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:57.239 04:39:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:57.239 04:39:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:16:57.239 ************************************
00:16:57.239 START TEST locking_app_on_unlocked_coremask
00:16:57.239 ************************************
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:16:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59151
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59151 /var/tmp/spdk.sock
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59151 ']'
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:57.239 04:39:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:16:57.497 [2024-11-27 04:39:04.305887] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:16:57.497 [2024-11-27 04:39:04.306173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59151 ]
00:16:57.497 [2024-11-27 04:39:04.461725] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:16:57.497 [2024-11-27 04:39:04.461899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:57.497 [2024-11-27 04:39:04.553952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:58.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59167
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59167 /var/tmp/spdk2.sock
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59167 ']'
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
04:39:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:16:58.320 [2024-11-27 04:39:05.268570] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:16:58.320 [2024-11-27 04:39:05.268847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59167 ]
00:16:58.320 [2024-11-27 04:39:05.434693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:58.589 [2024-11-27 04:39:05.611828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:59.521 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:59.521 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:16:59.521 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59167
00:16:59.521 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59167
00:16:59.521 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59151
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59151 ']'
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59151
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59151
00:16:59.780 killing process with pid 59151
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59151'
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59151
00:16:59.780 04:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59151
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59167
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59167 ']'
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59167
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59167
00:17:02.306 killing process with pid 59167
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59167'
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59167
00:17:02.306 04:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59167
00:17:03.680 ************************************
00:17:03.680 END TEST locking_app_on_unlocked_coremask
00:17:03.680 ************************************
00:17:03.680
00:17:03.680 real 0m6.381s
00:17:03.680 user 0m6.731s
00:17:03.680 sys 0m0.813s
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:03.680 04:39:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:17:03.680 04:39:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:03.680 04:39:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:03.680 04:39:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:17:03.680 ************************************
00:17:03.680 START TEST locking_app_on_locked_coremask
00:17:03.680 ************************************
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59258
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59258 /var/tmp/spdk.sock
00:17:03.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59258 ']'
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:03.680 04:39:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:03.680 [2024-11-27 04:39:10.752725] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:17:03.680 [2024-11-27 04:39:10.752852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ]
00:17:03.938 [2024-11-27 04:39:10.917682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:03.938 [2024-11-27 04:39:11.020397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59274
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59274 /var/tmp/spdk2.sock
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59274 /var/tmp/spdk2.sock
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:17:04.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59274 /var/tmp/spdk2.sock
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59274 ']'
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:04.504 04:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:04.504 [2024-11-27 04:39:11.692524] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:17:04.504 [2024-11-27 04:39:11.692635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59274 ]
00:17:04.762 [2024-11-27 04:39:11.864391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59258 has claimed it.
[2024-11-27 04:39:11.864453] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:17:05.327 ERROR: process (pid: 59274) is no longer running
00:17:05.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59274) - No such process
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59258
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59258
00:17:05.327 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59258
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59258 ']'
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59258
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59258
00:17:05.584 killing process with pid 59258
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59258'
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59258
00:17:05.584 04:39:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59258
00:17:06.958 ************************************
00:17:06.958 END TEST locking_app_on_locked_coremask
00:17:06.958 ************************************
00:17:06.958
00:17:06.958 real 0m3.445s
00:17:06.958 user 0m3.665s
00:17:06.958 sys 0m0.562s
00:17:06.958 04:39:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:06.958 04:39:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:06.958 04:39:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:17:07.216 04:39:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:07.216 04:39:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:07.216 04:39:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:17:07.216 ************************************
00:17:07.216 START TEST locking_overlapped_coremask
00:17:07.216 ************************************
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:17:07.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59333
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59333 /var/tmp/spdk.sock
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59333 ']'
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:07.216 04:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:07.216 [2024-11-27 04:39:14.235903] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
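The locking_app_on_locked_coremask run that just ended is the hard-failure path (and the mirror of locking_app_on_unlocked_coremask before it, where the opted-out target went first and the locking one could still claim core 0): with pid 59258 holding core 0, the second normally-locking instance 59274 had to die with the paired app.c errors 'Cannot create lock on core 0 ...' and 'Unable to acquire lock on assigned core mask - exiting', and the NOT wrapper turned that failure into the test's pass. The per-core locks behave like exclusive advisory file locks, so the collision can be illustrated with flock (illustrative only; app.c has its own locking code, but the failure mode is the same idea):

  exec 9>/var/tmp/spdk_cpu_lock_000
  flock -xn 9 || echo 'core 0 already claimed by another process'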
00:17:07.216 [2024-11-27 04:39:14.236027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ]
00:17:07.216 [2024-11-27 04:39:14.389620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:07.474 [2024-11-27 04:39:14.493716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:07.474 [2024-11-27 04:39:14.494076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:07.474 [2024-11-27 04:39:14.494114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:08.039 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59351
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59351 /var/tmp/spdk2.sock
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59351 /var/tmp/spdk2.sock
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:17:08.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59351 /var/tmp/spdk2.sock
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59351 ']'
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:08.040 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:08.040 [2024-11-27 04:39:15.166655] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
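The two reactor masks in this test overlap by construction: the first target runs with `-m 0x7` (cores 0-2) and the second, whose startup begins above, with `-m 0x1c` (cores 2-4). The contested core is just the intersection of the two masks:

  printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2

and core 2 is exactly the one named in the claim error that follows.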
00:17:08.040 [2024-11-27 04:39:15.166779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59351 ]
00:17:08.297 [2024-11-27 04:39:15.340287] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59333 has claimed it.
00:17:08.297 [2024-11-27 04:39:15.340360] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:17:08.865 ERROR: process (pid: 59351) is no longer running
00:17:08.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59351) - No such process
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59333
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59333 ']'
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59333
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59333
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59333'
killing process with pid 59333
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59333
00:17:08.865 04:39:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59333
00:17:10.262
00:17:10.262 real 0m3.232s
00:17:10.262 user 0m8.778s
00:17:10.262 sys 0m0.428s
00:17:10.262 04:39:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:10.262 04:39:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:17:10.262 ************************************
00:17:10.262 END TEST locking_overlapped_coremask
00:17:10.262 ************************************
00:17:10.262 04:39:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:17:10.262 04:39:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:10.262 04:39:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:10.262 04:39:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:17:10.262 ************************************
00:17:10.262 START TEST locking_overlapped_coremask_via_rpc
00:17:10.262 ************************************
00:17:10.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59404
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59404 /var/tmp/spdk.sock
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59404 ']'
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
04:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.521 [2024-11-27 04:39:17.516972] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:17:10.521 [2024-11-27 04:39:17.517112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59404 ]
00:17:10.521 [2024-11-27 04:39:17.677325] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
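check_remaining_locks, traced shortly before the previous END banner, pins down which per-core lock files the surviving `-m 0x7` target owns: the glob `/var/tmp/spdk_cpu_lock_*` must expand to exactly `spdk_cpu_lock_000` through `spdk_cpu_lock_002`. The same comparison restated without the escaped pattern of the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'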
00:17:10.521 [2024-11-27 04:39:17.677396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.780 [2024-11-27 04:39:17.783878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.780 [2024-11-27 04:39:17.784154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.780 [2024-11-27 04:39:17.784261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59422 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59422 /var/tmp/spdk2.sock 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59422 ']' 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.347 04:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.347 [2024-11-27 04:39:18.464757] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:11.347 [2024-11-27 04:39:18.465056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59422 ] 00:17:11.605 [2024-11-27 04:39:18.639505] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:11.605 [2024-11-27 04:39:18.639574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.863 [2024-11-27 04:39:18.873513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.863 [2024-11-27 04:39:18.873578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.863 [2024-11-27 04:39:18.873606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:13.237 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 [2024-11-27 04:39:20.075222] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59404 has claimed it. 
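The ERROR above is the target-side view of the overlap: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so once the first target claims its cores, core 2 is no longer available to the second. A minimal sketch of the same scenario, assuming a built SPDK tree at $SPDK_DIR and enough host cores for both masks:

    # Both targets start with CPU core locks disabled; masks overlap on core 2.
    $SPDK_DIR/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # First target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2.
    $SPDK_DIR/scripts/rpc.py framework_enable_cpumask_locks
    # Second target now cannot claim core 2; this fails with JSON-RPC
    # error -32603, "Failed to claim CPU core: 2", as the response below shows.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

In practice each socket should be waited on before issuing RPCs, which is what the waitforlisten traces in this section do.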
00:17:13.238 request: 00:17:13.238 { 00:17:13.238 "method": "framework_enable_cpumask_locks", 00:17:13.238 "req_id": 1 00:17:13.238 } 00:17:13.238 Got JSON-RPC error response 00:17:13.238 response: 00:17:13.238 { 00:17:13.238 "code": -32603, 00:17:13.238 "message": "Failed to claim CPU core: 2" 00:17:13.238 } 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59404 /var/tmp/spdk.sock 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59404 ']' 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59422 /var/tmp/spdk2.sock 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59422 ']' 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:13.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.238 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.542 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.542 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:13.542 ************************************ 00:17:13.542 END TEST locking_overlapped_coremask_via_rpc 00:17:13.542 ************************************ 00:17:13.542 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:13.542 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:13.542 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:13.543 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:13.543 00:17:13.543 real 0m3.105s 00:17:13.543 user 0m1.121s 00:17:13.543 sys 0m0.135s 00:17:13.543 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.543 04:39:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.543 04:39:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:17:13.543 04:39:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59404 ]] 00:17:13.543 04:39:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59404 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59404 ']' 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59404 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59404 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59404' 00:17:13.543 killing process with pid 59404 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59404 00:17:13.543 04:39:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59404 00:17:14.915 04:39:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59422 ]] 00:17:14.915 04:39:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59422 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59422 ']' 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59422 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.915 
04:39:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59422 00:17:14.915 killing process with pid 59422 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59422' 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59422 00:17:14.915 04:39:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59422 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59404 ]] 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59404 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59404 ']' 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59404 00:17:16.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59404) - No such process 00:17:16.289 Process with pid 59404 is not found 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59404 is not found' 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59422 ]] 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59422 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59422 ']' 00:17:16.289 Process with pid 59422 is not found 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59422 00:17:16.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59422) - No such process 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59422 is not found' 00:17:16.289 04:39:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:16.289 00:17:16.289 real 0m31.743s 00:17:16.289 user 0m54.356s 00:17:16.289 sys 0m4.488s 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.289 04:39:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:16.289 ************************************ 00:17:16.289 END TEST cpu_locks 00:17:16.289 ************************************ 00:17:16.289 ************************************ 00:17:16.289 END TEST event 00:17:16.289 ************************************ 00:17:16.289 00:17:16.289 real 1m0.044s 00:17:16.289 user 1m50.235s 00:17:16.289 sys 0m7.532s 00:17:16.289 04:39:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.289 04:39:23 event -- common/autotest_common.sh@10 -- # set +x 00:17:16.289 04:39:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:16.289 04:39:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.289 04:39:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.289 04:39:23 -- common/autotest_common.sh@10 -- # set +x 00:17:16.289 ************************************ 00:17:16.289 START TEST thread 00:17:16.289 ************************************ 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:16.289 * Looking for test storage... 
00:17:16.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:16.289 04:39:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.289 04:39:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.289 04:39:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.289 04:39:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.289 04:39:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.289 04:39:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.289 04:39:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.289 04:39:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.289 04:39:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.289 04:39:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.289 04:39:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.289 04:39:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:17:16.289 04:39:23 thread -- scripts/common.sh@345 -- # : 1 00:17:16.289 04:39:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.289 04:39:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.289 04:39:23 thread -- scripts/common.sh@365 -- # decimal 1 00:17:16.289 04:39:23 thread -- scripts/common.sh@353 -- # local d=1 00:17:16.289 04:39:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.289 04:39:23 thread -- scripts/common.sh@355 -- # echo 1 00:17:16.289 04:39:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.289 04:39:23 thread -- scripts/common.sh@366 -- # decimal 2 00:17:16.289 04:39:23 thread -- scripts/common.sh@353 -- # local d=2 00:17:16.289 04:39:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.289 04:39:23 thread -- scripts/common.sh@355 -- # echo 2 00:17:16.289 04:39:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.289 04:39:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.289 04:39:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.289 04:39:23 thread -- scripts/common.sh@368 -- # return 0 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:16.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.289 --rc genhtml_branch_coverage=1 00:17:16.289 --rc genhtml_function_coverage=1 00:17:16.289 --rc genhtml_legend=1 00:17:16.289 --rc geninfo_all_blocks=1 00:17:16.289 --rc geninfo_unexecuted_blocks=1 00:17:16.289 00:17:16.289 ' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:16.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.289 --rc genhtml_branch_coverage=1 00:17:16.289 --rc genhtml_function_coverage=1 00:17:16.289 --rc genhtml_legend=1 00:17:16.289 --rc geninfo_all_blocks=1 00:17:16.289 --rc geninfo_unexecuted_blocks=1 00:17:16.289 00:17:16.289 ' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:16.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:16.289 --rc genhtml_branch_coverage=1 00:17:16.289 --rc genhtml_function_coverage=1 00:17:16.289 --rc genhtml_legend=1 00:17:16.289 --rc geninfo_all_blocks=1 00:17:16.289 --rc geninfo_unexecuted_blocks=1 00:17:16.289 00:17:16.289 ' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:16.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.289 --rc genhtml_branch_coverage=1 00:17:16.289 --rc genhtml_function_coverage=1 00:17:16.289 --rc genhtml_legend=1 00:17:16.289 --rc geninfo_all_blocks=1 00:17:16.289 --rc geninfo_unexecuted_blocks=1 00:17:16.289 00:17:16.289 ' 00:17:16.289 04:39:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.289 04:39:23 thread -- common/autotest_common.sh@10 -- # set +x 00:17:16.289 ************************************ 00:17:16.289 START TEST thread_poller_perf 00:17:16.289 ************************************ 00:17:16.289 04:39:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:16.290 [2024-11-27 04:39:23.370916] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:16.290 [2024-11-27 04:39:23.371037] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:17:16.547 [2024-11-27 04:39:23.529461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.547 [2024-11-27 04:39:23.628486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.547 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:17:17.924 [2024-11-27T04:39:25.127Z] ====================================== 00:17:17.924 [2024-11-27T04:39:25.127Z] busy:2610041304 (cyc) 00:17:17.924 [2024-11-27T04:39:25.127Z] total_run_count: 303000 00:17:17.924 [2024-11-27T04:39:25.127Z] tsc_hz: 2600000000 (cyc) 00:17:17.924 [2024-11-27T04:39:25.127Z] ====================================== 00:17:17.924 [2024-11-27T04:39:25.127Z] poller_cost: 8613 (cyc), 3312 (nsec) 00:17:17.924 00:17:17.924 real 0m1.458s 00:17:17.924 user 0m1.281s 00:17:17.924 sys 0m0.067s 00:17:17.924 04:39:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.924 ************************************ 00:17:17.924 END TEST thread_poller_perf 00:17:17.924 ************************************ 00:17:17.924 04:39:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:17.924 04:39:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:17.924 04:39:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:17:17.924 04:39:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.924 04:39:24 thread -- common/autotest_common.sh@10 -- # set +x 00:17:17.924 ************************************ 00:17:17.924 START TEST thread_poller_perf 00:17:17.924 ************************************ 00:17:17.924 04:39:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:17.924 [2024-11-27 04:39:24.861517] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:17.924 [2024-11-27 04:39:24.861655] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:17:17.924 [2024-11-27 04:39:25.023609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.924 Running 1000 pollers for 1 seconds with 0 microseconds period. 
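For the 1-microsecond-period run above, poller_cost is simply the busy cycle count divided by the number of poller invocations, converted to wall time via tsc_hz; the flags on the run_test line earlier (-b 1000 -l 1 -t 1) request 1000 pollers, a 1 microsecond period, and a 1 second run. The arithmetic is reproducible in plain bash:

    # cycles per poller invocation = busy / total_run_count
    echo $(( 2610041304 / 303000 ))              # -> 8613 (cyc)
    # nanoseconds per invocation at tsc_hz = 2600000000
    echo $(( 8613 * 1000000000 / 2600000000 ))   # -> 3312 (nsec)

The zero-period run whose results follow registers active (non-timed) pollers that fire on every reactor iteration, so its per-invocation cost comes out far lower.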
00:17:17.924 [2024-11-27 04:39:25.122924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.347 [2024-11-27T04:39:26.550Z] ====================================== 00:17:19.347 [2024-11-27T04:39:26.550Z] busy:2603326748 (cyc) 00:17:19.347 [2024-11-27T04:39:26.550Z] total_run_count: 3774000 00:17:19.347 [2024-11-27T04:39:26.550Z] tsc_hz: 2600000000 (cyc) 00:17:19.347 [2024-11-27T04:39:26.550Z] ====================================== 00:17:19.347 [2024-11-27T04:39:26.550Z] poller_cost: 689 (cyc), 265 (nsec) 00:17:19.347 ************************************ 00:17:19.347 END TEST thread_poller_perf 00:17:19.347 00:17:19.347 real 0m1.451s 00:17:19.347 user 0m1.281s 00:17:19.347 sys 0m0.061s 00:17:19.347 04:39:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.347 04:39:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:19.347 ************************************ 00:17:19.347 04:39:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:19.347 ************************************ 00:17:19.347 END TEST thread 00:17:19.347 ************************************ 00:17:19.347 00:17:19.347 real 0m3.131s 00:17:19.347 user 0m2.683s 00:17:19.347 sys 0m0.233s 00:17:19.347 04:39:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.347 04:39:26 thread -- common/autotest_common.sh@10 -- # set +x 00:17:19.347 04:39:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:17:19.347 04:39:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:19.347 04:39:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.347 04:39:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.347 04:39:26 -- common/autotest_common.sh@10 -- # set +x 00:17:19.347 ************************************ 00:17:19.347 START TEST app_cmdline 00:17:19.347 ************************************ 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:19.347 * Looking for test storage... 
00:17:19.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:17:19.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.347 04:39:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.347 --rc genhtml_branch_coverage=1 00:17:19.347 --rc genhtml_function_coverage=1 00:17:19.347 --rc genhtml_legend=1 00:17:19.347 --rc geninfo_all_blocks=1 00:17:19.347 --rc geninfo_unexecuted_blocks=1 00:17:19.347 00:17:19.347 ' 00:17:19.347 04:39:26 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.347 --rc genhtml_branch_coverage=1 00:17:19.347 --rc genhtml_function_coverage=1 00:17:19.347 --rc genhtml_legend=1 00:17:19.347 --rc geninfo_all_blocks=1 00:17:19.348 --rc geninfo_unexecuted_blocks=1 00:17:19.348 00:17:19.348 ' 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.348 --rc genhtml_branch_coverage=1 00:17:19.348 --rc genhtml_function_coverage=1 00:17:19.348 --rc genhtml_legend=1 00:17:19.348 --rc geninfo_all_blocks=1 00:17:19.348 --rc geninfo_unexecuted_blocks=1 00:17:19.348 00:17:19.348 ' 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.348 --rc genhtml_branch_coverage=1 00:17:19.348 --rc genhtml_function_coverage=1 00:17:19.348 --rc genhtml_legend=1 00:17:19.348 --rc geninfo_all_blocks=1 00:17:19.348 --rc geninfo_unexecuted_blocks=1 00:17:19.348 00:17:19.348 ' 00:17:19.348 04:39:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:19.348 04:39:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59702 00:17:19.348 04:39:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59702 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59702 ']' 00:17:19.348 04:39:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.348 04:39:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:19.348 [2024-11-27 04:39:26.533896] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:17:19.348 [2024-11-27 04:39:26.534151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59702 ] 00:17:19.604 [2024-11-27 04:39:26.683013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.604 [2024-11-27 04:39:26.773580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.538 04:39:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.538 04:39:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:17:20.538 04:39:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:20.538 { 00:17:20.538 "version": "SPDK v25.01-pre git sha1 78decfef6", 00:17:20.538 "fields": { 00:17:20.538 "major": 25, 00:17:20.538 "minor": 1, 00:17:20.538 "patch": 0, 00:17:20.538 "suffix": "-pre", 00:17:20.538 "commit": "78decfef6" 00:17:20.538 } 00:17:20.538 } 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:20.539 04:39:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:20.539 04:39:27 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:20.796 request: 00:17:20.796 { 00:17:20.796 "method": "env_dpdk_get_mem_stats", 00:17:20.796 "req_id": 1 00:17:20.796 } 00:17:20.796 Got JSON-RPC error response 00:17:20.796 response: 00:17:20.796 { 00:17:20.796 "code": -32601, 00:17:20.796 "message": "Method not found" 00:17:20.796 } 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.796 04:39:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59702 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59702 ']' 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59702 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59702 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.796 killing process with pid 59702 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59702' 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 59702 00:17:20.796 04:39:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 59702 00:17:22.171 00:17:22.171 real 0m2.732s 00:17:22.171 user 0m3.089s 00:17:22.171 sys 0m0.397s 00:17:22.171 04:39:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.171 ************************************ 00:17:22.171 04:39:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:22.171 END TEST app_cmdline 00:17:22.171 ************************************ 00:17:22.171 04:39:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:22.171 04:39:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:22.171 04:39:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.171 04:39:29 -- common/autotest_common.sh@10 -- # set +x 00:17:22.171 ************************************ 00:17:22.171 START TEST version 00:17:22.171 ************************************ 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:22.171 * Looking for test storage... 
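The -32601 response above is the RPC allowlist at work rather than a missing feature: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats, although an ordinary RPC, is reported as not found. A minimal sketch of the same checks, again assuming a built tree at $SPDK_DIR:

    $SPDK_DIR/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # Allowed: returns the version object logged earlier.
    $SPDK_DIR/scripts/rpc.py spdk_get_version
    # Allowed: lists exactly the two permitted methods.
    $SPDK_DIR/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # Not in the allowlist: fails with JSON-RPC -32601 "Method not found".
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats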
00:17:22.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.171 04:39:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.171 04:39:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.171 04:39:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.171 04:39:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.171 04:39:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.171 04:39:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.171 04:39:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.171 04:39:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.171 04:39:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.171 04:39:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.171 04:39:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.171 04:39:29 version -- scripts/common.sh@344 -- # case "$op" in 00:17:22.171 04:39:29 version -- scripts/common.sh@345 -- # : 1 00:17:22.171 04:39:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.171 04:39:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.171 04:39:29 version -- scripts/common.sh@365 -- # decimal 1 00:17:22.171 04:39:29 version -- scripts/common.sh@353 -- # local d=1 00:17:22.171 04:39:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.171 04:39:29 version -- scripts/common.sh@355 -- # echo 1 00:17:22.171 04:39:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.171 04:39:29 version -- scripts/common.sh@366 -- # decimal 2 00:17:22.171 04:39:29 version -- scripts/common.sh@353 -- # local d=2 00:17:22.171 04:39:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.171 04:39:29 version -- scripts/common.sh@355 -- # echo 2 00:17:22.171 04:39:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.171 04:39:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.171 04:39:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.171 04:39:29 version -- scripts/common.sh@368 -- # return 0 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.171 --rc genhtml_branch_coverage=1 00:17:22.171 --rc genhtml_function_coverage=1 00:17:22.171 --rc genhtml_legend=1 00:17:22.171 --rc geninfo_all_blocks=1 00:17:22.171 --rc geninfo_unexecuted_blocks=1 00:17:22.171 00:17:22.171 ' 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.171 --rc genhtml_branch_coverage=1 00:17:22.171 --rc genhtml_function_coverage=1 00:17:22.171 --rc genhtml_legend=1 00:17:22.171 --rc geninfo_all_blocks=1 00:17:22.171 --rc geninfo_unexecuted_blocks=1 00:17:22.171 00:17:22.171 ' 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.171 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:22.171 --rc genhtml_branch_coverage=1 00:17:22.171 --rc genhtml_function_coverage=1 00:17:22.171 --rc genhtml_legend=1 00:17:22.171 --rc geninfo_all_blocks=1 00:17:22.171 --rc geninfo_unexecuted_blocks=1 00:17:22.171 00:17:22.171 ' 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.171 --rc genhtml_branch_coverage=1 00:17:22.171 --rc genhtml_function_coverage=1 00:17:22.171 --rc genhtml_legend=1 00:17:22.171 --rc geninfo_all_blocks=1 00:17:22.171 --rc geninfo_unexecuted_blocks=1 00:17:22.171 00:17:22.171 ' 00:17:22.171 04:39:29 version -- app/version.sh@17 -- # get_header_version major 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # cut -f2 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:22.171 04:39:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:22.171 04:39:29 version -- app/version.sh@17 -- # major=25 00:17:22.171 04:39:29 version -- app/version.sh@18 -- # get_header_version minor 00:17:22.171 04:39:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # cut -f2 00:17:22.171 04:39:29 version -- app/version.sh@18 -- # minor=1 00:17:22.171 04:39:29 version -- app/version.sh@19 -- # get_header_version patch 00:17:22.171 04:39:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # cut -f2 00:17:22.171 04:39:29 version -- app/version.sh@19 -- # patch=0 00:17:22.171 04:39:29 version -- app/version.sh@20 -- # get_header_version suffix 00:17:22.171 04:39:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # cut -f2 00:17:22.171 04:39:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:22.171 04:39:29 version -- app/version.sh@20 -- # suffix=-pre 00:17:22.171 04:39:29 version -- app/version.sh@22 -- # version=25.1 00:17:22.171 04:39:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:22.171 04:39:29 version -- app/version.sh@28 -- # version=25.1rc0 00:17:22.171 04:39:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:22.171 04:39:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:22.171 04:39:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:17:22.171 04:39:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:17:22.171 00:17:22.171 real 0m0.189s 00:17:22.171 user 0m0.124s 00:17:22.171 sys 0m0.090s 00:17:22.171 04:39:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.171 ************************************ 00:17:22.171 END TEST version 00:17:22.171 ************************************ 00:17:22.171 04:39:29 version -- common/autotest_common.sh@10 -- # set +x 00:17:22.171 04:39:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:17:22.171 04:39:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:17:22.171 04:39:29 -- spdk/autotest.sh@194 -- # uname -s 00:17:22.171 04:39:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:22.171 04:39:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:22.171 04:39:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:22.171 04:39:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:17:22.171 04:39:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:22.171 04:39:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.171 04:39:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.171 04:39:29 -- common/autotest_common.sh@10 -- # set +x 00:17:22.171 ************************************ 00:17:22.171 START TEST blockdev_nvme 00:17:22.171 ************************************ 00:17:22.171 04:39:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:22.429 * Looking for test storage... 00:17:22.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:22.429 04:39:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.430 04:39:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.430 --rc genhtml_branch_coverage=1 00:17:22.430 --rc genhtml_function_coverage=1 00:17:22.430 --rc genhtml_legend=1 00:17:22.430 --rc geninfo_all_blocks=1 00:17:22.430 --rc geninfo_unexecuted_blocks=1 00:17:22.430 00:17:22.430 ' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.430 --rc genhtml_branch_coverage=1 00:17:22.430 --rc genhtml_function_coverage=1 00:17:22.430 --rc genhtml_legend=1 00:17:22.430 --rc geninfo_all_blocks=1 00:17:22.430 --rc geninfo_unexecuted_blocks=1 00:17:22.430 00:17:22.430 ' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.430 --rc genhtml_branch_coverage=1 00:17:22.430 --rc genhtml_function_coverage=1 00:17:22.430 --rc genhtml_legend=1 00:17:22.430 --rc geninfo_all_blocks=1 00:17:22.430 --rc geninfo_unexecuted_blocks=1 00:17:22.430 00:17:22.430 ' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.430 --rc genhtml_branch_coverage=1 00:17:22.430 --rc genhtml_function_coverage=1 00:17:22.430 --rc genhtml_legend=1 00:17:22.430 --rc geninfo_all_blocks=1 00:17:22.430 --rc geninfo_unexecuted_blocks=1 00:17:22.430 00:17:22.430 ' 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:22.430 04:39:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59874 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59874 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59874 ']' 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.430 04:39:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.430 04:39:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:22.430 [2024-11-27 04:39:29.584938] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:17:22.430 [2024-11-27 04:39:29.585325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59874 ] 00:17:22.688 [2024-11-27 04:39:29.742770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.688 [2024-11-27 04:39:29.842623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.253 04:39:30 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.253 04:39:30 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:17:23.253 04:39:30 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:23.253 04:39:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:17:23.253 04:39:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:17:23.253 04:39:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:17:23.253 04:39:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:23.513 04:39:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:17:23.513 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.513 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.774 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.774 04:39:30 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:23.774 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.774 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 04:39:30 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dd6a5407-f43c-444a-98b1-2bedcc754575"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dd6a5407-f43c-444a-98b1-2bedcc754575",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "76c454c0-a435-4fab-a197-b6d0921216a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "76c454c0-a435-4fab-a197-b6d0921216a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e25c4926-56e8-4d71-9b59-604417cee1e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e25c4926-56e8-4d71-9b59-604417cee1e6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a9ebddbe-bcf2-4810-9192-92dc89958de1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9ebddbe-bcf2-4810-9192-92dc89958de1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9511a86c-b7bc-4e91-a1f6-f9dcff8a064d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "9511a86c-b7bc-4e91-a1f6-f9dcff8a064d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1daf0e79-5a4d-485d-b7f9-94155757e51c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1daf0e79-5a4d-485d-b7f9-94155757e51c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:23.775 04:39:30 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59874 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59874 ']' 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59874 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:17:23.775 04:39:30 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59874 00:17:23.775 killing process with pid 59874 00:17:23.775 04:39:30 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.776 04:39:30 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.776 04:39:30 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59874' 00:17:23.776 04:39:30 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59874 00:17:23.776 04:39:30 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59874 00:17:25.683 04:39:32 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.683 04:39:32 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:25.683 04:39:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:25.683 04:39:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.683 04:39:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.683 ************************************ 00:17:25.683 START TEST bdev_hello_world 00:17:25.683 ************************************ 00:17:25.683 04:39:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:25.683 [2024-11-27 04:39:32.514909] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:25.683 [2024-11-27 04:39:32.515182] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:17:25.683 [2024-11-27 04:39:32.673001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.683 [2024-11-27 04:39:32.772154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.250 [2024-11-27 04:39:33.308954] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:26.250 [2024-11-27 04:39:33.309169] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:17:26.250 [2024-11-27 04:39:33.309201] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:26.250 [2024-11-27 04:39:33.311759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:26.250 [2024-11-27 04:39:33.312396] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:26.250 [2024-11-27 04:39:33.312422] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:26.250 [2024-11-27 04:39:33.313115] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
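The hello_bdev run above exercises the basic SPDK bdev API sequence: start the app framework with a JSON bdev config, open the target bdev, acquire an I/O channel, write a buffer, then read it back and compare. A minimal sketch of repeating it by hand, assuming this job's repo layout (/home/vagrant/spdk_repo/spdk), a gen_nvme.sh that supports --json-with-subsystems, and /tmp/bdev.json as an illustrative scratch path:

  cd /home/vagrant/spdk_repo/spdk
  # enumerate the local NVMe controllers into a subsystem-wrapped bdev config
  scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
  # open Nvme0n1, write "Hello World!", read it back, then stop the app
  build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

The -b argument matches the hello_world_bdev=Nvme0n1 selection above; any unclaimed bdev name reported by scripts/rpc.py bdev_get_bdevs should work as well.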
00:17:26.250 00:17:26.250 [2024-11-27 04:39:33.313145] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:27.189 00:17:27.189 real 0m1.593s 00:17:27.189 user 0m1.303s 00:17:27.189 sys 0m0.181s 00:17:27.189 04:39:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.189 ************************************ 00:17:27.189 04:39:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:27.189 END TEST bdev_hello_world 00:17:27.189 ************************************ 00:17:27.189 04:39:34 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:27.189 04:39:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.189 04:39:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.189 04:39:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.189 ************************************ 00:17:27.189 START TEST bdev_bounds 00:17:27.189 ************************************ 00:17:27.189 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:27.189 Process bdevio pid: 59989 00:17:27.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59989 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59989' 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59989 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59989 ']' 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.190 04:39:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:27.190 [2024-11-27 04:39:34.178838] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:17:27.190 [2024-11-27 04:39:34.179098] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59989 ] 00:17:27.190 [2024-11-27 04:39:34.340734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.450 [2024-11-27 04:39:34.447230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.450 [2024-11-27 04:39:34.447625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.450 [2024-11-27 04:39:34.447803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.019 04:39:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.019 04:39:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:28.019 04:39:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:28.019 I/O targets: 00:17:28.019 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:28.019 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:28.019 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:28.019 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:28.019 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:28.019 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:28.019 00:17:28.019 00:17:28.019 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.019 http://cunit.sourceforge.net/ 00:17:28.019 00:17:28.019 00:17:28.019 Suite: bdevio tests on: Nvme3n1 00:17:28.019 Test: blockdev write read block ...passed 00:17:28.019 Test: blockdev write zeroes read block ...passed 00:17:28.019 Test: blockdev write zeroes read no split ...passed 00:17:28.019 Test: blockdev write zeroes read split ...passed 00:17:28.280 Test: blockdev write zeroes read split partial ...passed 00:17:28.280 Test: blockdev reset ...[2024-11-27 04:39:35.225868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:17:28.280 [2024-11-27 04:39:35.229062] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:17:28.280 Test: blockdev write read 8 blocks ...
00:17:28.280 passed 00:17:28.280 Test: blockdev write read size > 128k ...passed 00:17:28.280 Test: blockdev write read invalid size ...passed 00:17:28.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.280 Test: blockdev write read max offset ...passed 00:17:28.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.280 Test: blockdev writev readv 8 blocks ...passed 00:17:28.280 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.280 Test: blockdev writev readv block ...passed 00:17:28.281 Test: blockdev writev readv size > 128k ...passed 00:17:28.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.281 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.249786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc80a000 len:0x1000 00:17:28.281 [2024-11-27 04:39:35.249937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:28.281 passed 00:17:28.281 Test: blockdev nvme passthru rw ...passed 00:17:28.281 Test: blockdev nvme passthru vendor specific ...passed 00:17:28.281 Test: blockdev nvme admin passthru ...[2024-11-27 04:39:35.252427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:28.281 [2024-11-27 04:39:35.252467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:28.281 passed 00:17:28.281 Test: blockdev copy ...passed 00:17:28.281 Suite: bdevio tests on: Nvme2n3 00:17:28.281 Test: blockdev write read block ...passed 00:17:28.281 Test: blockdev write zeroes read block ...passed 00:17:28.281 Test: blockdev write zeroes read no split ...passed 00:17:28.281 Test: blockdev write zeroes read split ...passed 00:17:28.281 Test: blockdev write zeroes read split partial ...passed 00:17:28.281 Test: blockdev reset ...[2024-11-27 04:39:35.354857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:28.281 [2024-11-27 04:39:35.358606] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:17:28.281 passed 00:17:28.281 Test: blockdev write read 8 blocks ...passed 00:17:28.281 Test: blockdev write read size > 128k ...passed 00:17:28.281 Test: blockdev write read invalid size ...passed 00:17:28.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.281 Test: blockdev write read max offset ...passed 00:17:28.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.281 Test: blockdev writev readv 8 blocks ...passed 00:17:28.281 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.281 Test: blockdev writev readv block ...passed 00:17:28.281 Test: blockdev writev readv size > 128k ...passed 00:17:28.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.281 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.379932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298006000 len:0x1000 00:17:28.281 [2024-11-27 04:39:35.379984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:28.281 passed 00:17:28.281 Test: blockdev nvme passthru rw ...passed 00:17:28.281 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:39:35.383088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:28.281 [2024-11-27 04:39:35.383128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:28.281 passed 00:17:28.281 Test: blockdev nvme admin passthru ...passed 00:17:28.281 Test: blockdev copy ...passed 00:17:28.281 Suite: bdevio tests on: Nvme2n2 00:17:28.281 Test: blockdev write read block ...passed 00:17:28.281 Test: blockdev write zeroes read block ...passed 00:17:28.281 Test: blockdev write zeroes read no split ...passed 00:17:28.281 Test: blockdev write zeroes read split ...passed 00:17:28.281 Test: blockdev write zeroes read split partial ...passed 00:17:28.281 Test: blockdev reset ...[2024-11-27 04:39:35.480943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:28.543 [2024-11-27 04:39:35.486665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:17:28.543 Test: blockdev write read 8 blocks ...
00:17:28.543 passed 00:17:28.543 Test: blockdev write read size > 128k ...passed 00:17:28.543 Test: blockdev write read invalid size ...passed 00:17:28.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.543 Test: blockdev write read max offset ...passed 00:17:28.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.543 Test: blockdev writev readv 8 blocks ...passed 00:17:28.543 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.543 Test: blockdev writev readv block ...passed 00:17:28.543 Test: blockdev writev readv size > 128k ...passed 00:17:28.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.543 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.507008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca83c000 len:0x1000 00:17:28.543 [2024-11-27 04:39:35.507173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:28.543 passed 00:17:28.543 Test: blockdev nvme passthru rw ...passed 00:17:28.543 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:39:35.509930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:28.543 [2024-11-27 04:39:35.510008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:28.543 passed 00:17:28.543 Test: blockdev nvme admin passthru ...passed 00:17:28.543 Test: blockdev copy ...passed 00:17:28.543 Suite: bdevio tests on: Nvme2n1 00:17:28.543 Test: blockdev write read block ...passed 00:17:28.543 Test: blockdev write zeroes read block ...passed 00:17:28.543 Test: blockdev write zeroes read no split ...passed 00:17:28.543 Test: blockdev write zeroes read split ...passed 00:17:28.543 Test: blockdev write zeroes read split partial ...passed 00:17:28.543 Test: blockdev reset ...[2024-11-27 04:39:35.610548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:28.543 [2024-11-27 04:39:35.615141] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:17:28.543 00:17:28.543 Test: blockdev write read 8 blocks ...passed 00:17:28.543 Test: blockdev write read size > 128k ...passed 00:17:28.543 Test: blockdev write read invalid size ...passed 00:17:28.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.543 Test: blockdev write read max offset ...passed 00:17:28.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.543 Test: blockdev writev readv 8 blocks ...passed 00:17:28.543 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.543 Test: blockdev writev readv block ...passed 00:17:28.543 Test: blockdev writev readv size > 128k ...passed 00:17:28.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.543 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.634235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca838000 len:0x1000 00:17:28.543 [2024-11-27 04:39:35.634402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:28.543 passed 00:17:28.543 Test: blockdev nvme passthru rw ...passed 00:17:28.543 Test: blockdev nvme passthru vendor specific ...passed 00:17:28.543 Test: blockdev nvme admin passthru ...[2024-11-27 04:39:35.637865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:28.543 [2024-11-27 04:39:35.637944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:28.543 passed 00:17:28.543 Test: blockdev copy ...passed 00:17:28.543 Suite: bdevio tests on: Nvme1n1 00:17:28.543 Test: blockdev write read block ...passed 00:17:28.543 Test: blockdev write zeroes read block ...passed 00:17:28.543 Test: blockdev write zeroes read no split ...passed 00:17:28.543 Test: blockdev write zeroes read split ...passed 00:17:28.543 Test: blockdev write zeroes read split partial ...passed 00:17:28.543 Test: blockdev reset ...[2024-11-27 04:39:35.742441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:17:28.804 [2024-11-27 04:39:35.746392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:17:28.804 Test: blockdev write read 8 blocks ...
00:17:28.804 passed 00:17:28.804 Test: blockdev write read size > 128k ...passed 00:17:28.804 Test: blockdev write read invalid size ...passed 00:17:28.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.804 Test: blockdev write read max offset ...passed 00:17:28.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.804 Test: blockdev writev readv 8 blocks ...passed 00:17:28.804 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.804 Test: blockdev writev readv block ...passed 00:17:28.804 Test: blockdev writev readv size > 128k ...passed 00:17:28.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.804 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.764260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca834000 len:0x1000 00:17:28.804 [2024-11-27 04:39:35.764417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:28.804 passed 00:17:28.804 Test: blockdev nvme passthru rw ...passed 00:17:28.804 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:39:35.766584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:28.804 [2024-11-27 04:39:35.766622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:28.804 passed 00:17:28.804 Test: blockdev nvme admin passthru ...passed 00:17:28.804 Test: blockdev copy ...passed 00:17:28.804 Suite: bdevio tests on: Nvme0n1 00:17:28.804 Test: blockdev write read block ...passed 00:17:28.804 Test: blockdev write zeroes read block ...passed 00:17:28.804 Test: blockdev write zeroes read no split ...passed 00:17:28.804 Test: blockdev write zeroes read split ...passed 00:17:28.804 Test: blockdev write zeroes read split partial ...passed 00:17:28.804 Test: blockdev reset ...[2024-11-27 04:39:35.959057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:17:28.804 [2024-11-27 04:39:35.962188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:17:28.804 passed 00:17:28.804 Test: blockdev write read 8 blocks ...passed 00:17:28.804 Test: blockdev write read size > 128k ...passed 00:17:28.804 Test: blockdev write read invalid size ...passed 00:17:28.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.804 Test: blockdev write read max offset ...passed 00:17:28.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.804 Test: blockdev writev readv 8 blocks ...passed 00:17:28.804 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.804 Test: blockdev writev readv block ...passed 00:17:28.804 Test: blockdev writev readv size > 128k ...passed 00:17:28.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.804 Test: blockdev comparev and writev ...[2024-11-27 04:39:35.972452] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:17:28.804 separate metadata which is not supported yet. 
00:17:28.804 passed 00:17:28.804 Test: blockdev nvme passthru rw ...passed 00:17:28.804 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:39:35.973260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:17:28.804 [2024-11-27 04:39:35.973433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:17:28.804 passed 00:17:28.804 Test: blockdev nvme admin passthru ...passed 00:17:28.804 Test: blockdev copy ...passed 00:17:28.804 00:17:28.804 Run Summary: Type Total Ran Passed Failed Inactive 00:17:28.804 suites 6 6 n/a 0 0 00:17:28.804 tests 138 138 138 0 0 00:17:28.804 asserts 893 893 893 0 n/a 00:17:28.804 00:17:28.804 Elapsed time = 2.030 seconds 00:17:28.804 0 00:17:28.804 04:39:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59989 00:17:28.804 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59989 ']' 00:17:28.804 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59989 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59989 00:17:29.063 killing process with pid 59989 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59989' 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59989 00:17:29.063 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59989 00:17:29.995 ************************************ 00:17:29.995 END TEST bdev_bounds 00:17:29.995 ************************************ 00:17:29.995 04:39:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:29.995 00:17:29.995 real 0m2.769s 00:17:29.995 user 0m6.892s 00:17:29.995 sys 0m0.301s 00:17:29.995 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.995 04:39:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:29.995 04:39:36 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:29.995 04:39:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:29.995 04:39:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.995 04:39:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:29.995 ************************************ 00:17:29.995 START TEST bdev_nbd 00:17:29.995 ************************************ 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:29.995 04:39:36 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60054 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60054 /var/tmp/spdk-nbd.sock 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60054 ']' 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:29.995 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.996 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:29.996 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.996 04:39:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:29.996 04:39:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:29.996 [2024-11-27 04:39:36.994962] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:17:29.996 [2024-11-27 04:39:36.995248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.996 [2024-11-27 04:39:37.155281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.253 [2024-11-27 04:39:37.257487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:30.832 04:39:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.091 1+0 records in 
00:17:31.091 1+0 records out 00:17:31.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420686 s, 9.7 MB/s 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:31.091 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.352 1+0 records in 00:17:31.352 1+0 records out 00:17:31.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161871 s, 2.5 MB/s 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.352 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.612 1+0 records in 00:17:31.612 1+0 records out 00:17:31.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011369 s, 3.6 MB/s 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:31.612 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.613 1+0 records in 00:17:31.613 1+0 records out 00:17:31.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778529 s, 5.3 MB/s 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.613 04:39:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:31.613 04:39:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:17:31.872 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.873 1+0 records in 00:17:31.873 1+0 records out 00:17:31.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070096 s, 5.8 MB/s 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:31.873 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.130 1+0 records in 00:17:32.130 1+0 records out 00:17:32.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044344 s, 9.2 MB/s 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:32.130 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd0", 00:17:32.387 "bdev_name": "Nvme0n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd1", 00:17:32.387 "bdev_name": "Nvme1n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd2", 00:17:32.387 "bdev_name": "Nvme2n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd3", 00:17:32.387 "bdev_name": "Nvme2n2" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd4", 00:17:32.387 "bdev_name": "Nvme2n3" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd5", 00:17:32.387 "bdev_name": "Nvme3n1" 00:17:32.387 } 00:17:32.387 ]' 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd0", 00:17:32.387 "bdev_name": "Nvme0n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd1", 00:17:32.387 "bdev_name": "Nvme1n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd2", 00:17:32.387 "bdev_name": "Nvme2n1" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd3", 00:17:32.387 "bdev_name": "Nvme2n2" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd4", 00:17:32.387 "bdev_name": "Nvme2n3" 00:17:32.387 }, 00:17:32.387 { 00:17:32.387 "nbd_device": "/dev/nbd5", 00:17:32.387 "bdev_name": "Nvme3n1" 00:17:32.387 } 00:17:32.387 ]' 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.387 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.645 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.904 04:39:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:33.165 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.166 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.166 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.426 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.687 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:33.948 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:33.948 04:39:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:33.948 04:39:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:33.948 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:17:34.209 /dev/nbd0 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.209 
04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.209 1+0 records in 00:17:34.209 1+0 records out 00:17:34.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112677 s, 3.6 MB/s 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:34.209 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:17:34.470 /dev/nbd1 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:34.470 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.471 1+0 records in 00:17:34.471 1+0 records out 00:17:34.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055362 s, 7.4 MB/s 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:34.471 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:17:34.732 /dev/nbd10 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.732 1+0 records in 00:17:34.732 1+0 records out 00:17:34.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459115 s, 8.9 MB/s 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:34.732 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:17:34.996 /dev/nbd11 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.996 1+0 records in 00:17:34.996 1+0 records out 00:17:34.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012321 s, 3.3 MB/s 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:34.996 04:39:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:17:34.996 /dev/nbd12 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.270 1+0 records in 00:17:35.270 1+0 records out 00:17:35.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127375 s, 3.2 MB/s 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:17:35.270 /dev/nbd13 00:17:35.270 04:39:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.270 1+0 records in 00:17:35.270 1+0 records out 00:17:35.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739491 s, 5.5 MB/s 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:35.270 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:35.531 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd0", 00:17:35.531 "bdev_name": "Nvme0n1" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd1", 00:17:35.531 "bdev_name": "Nvme1n1" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd10", 00:17:35.531 "bdev_name": "Nvme2n1" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd11", 00:17:35.531 "bdev_name": "Nvme2n2" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd12", 00:17:35.531 "bdev_name": "Nvme2n3" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd13", 00:17:35.531 "bdev_name": "Nvme3n1" 00:17:35.531 } 00:17:35.531 ]' 00:17:35.531 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:35.531 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd0", 00:17:35.531 "bdev_name": "Nvme0n1" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd1", 00:17:35.531 "bdev_name": "Nvme1n1" 00:17:35.531 }, 00:17:35.531 { 
00:17:35.531 "nbd_device": "/dev/nbd10", 00:17:35.531 "bdev_name": "Nvme2n1" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd11", 00:17:35.531 "bdev_name": "Nvme2n2" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd12", 00:17:35.531 "bdev_name": "Nvme2n3" 00:17:35.531 }, 00:17:35.531 { 00:17:35.531 "nbd_device": "/dev/nbd13", 00:17:35.532 "bdev_name": "Nvme3n1" 00:17:35.532 } 00:17:35.532 ]' 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:35.532 /dev/nbd1 00:17:35.532 /dev/nbd10 00:17:35.532 /dev/nbd11 00:17:35.532 /dev/nbd12 00:17:35.532 /dev/nbd13' 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:35.532 /dev/nbd1 00:17:35.532 /dev/nbd10 00:17:35.532 /dev/nbd11 00:17:35.532 /dev/nbd12 00:17:35.532 /dev/nbd13' 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:35.532 256+0 records in 00:17:35.532 256+0 records out 00:17:35.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557049 s, 188 MB/s 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:35.532 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:35.793 256+0 records in 00:17:35.793 256+0 records out 00:17:35.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132347 s, 7.9 MB/s 00:17:35.793 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:35.793 04:39:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:36.053 256+0 records in 00:17:36.053 256+0 records out 00:17:36.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232852 s, 4.5 MB/s 00:17:36.053 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:36.053 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:36.312 256+0 records in 00:17:36.312 256+0 records out 00:17:36.312 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.20323 s, 5.2 MB/s 00:17:36.312 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:36.312 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:36.312 256+0 records in 00:17:36.312 256+0 records out 00:17:36.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201987 s, 5.2 MB/s 00:17:36.312 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:36.312 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:36.572 256+0 records in 00:17:36.572 256+0 records out 00:17:36.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259263 s, 4.0 MB/s 00:17:36.572 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:36.572 04:39:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:37.142 256+0 records in 00:17:37.142 256+0 records out 00:17:37.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.272579 s, 3.8 MB/s 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.142 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:37.402 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:37.402 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:37.402 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:37.402 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.403 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.663 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.924 04:39:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:38.186 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.187 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:38.448 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:38.708 malloc_lvol_verify 00:17:38.708 04:39:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:39.039 53b11ee3-fb39-4eae-b578-38eec5cf9bd9 00:17:39.039 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:39.300 a6399be0-a02c-4194-85de-9c3edd2a78a5 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:39.300 /dev/nbd0 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:39.300 mke2fs 1.47.0 (5-Feb-2023) 00:17:39.300 Discarding device blocks: 0/4096 done 00:17:39.300 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:39.300 00:17:39.300 Allocating group tables: 0/1 done 00:17:39.300 Writing inode tables: 0/1 done 00:17:39.300 Creating journal (1024 blocks): done 00:17:39.300 Writing superblocks and filesystem accounting information: 0/1 done 00:17:39.300 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:39.300 04:39:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:39.300 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:39.301 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.301 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60054 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60054 ']' 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60054 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60054 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.565 killing process with pid 60054 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60054' 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60054 00:17:39.565 04:39:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60054 00:17:40.499 ************************************ 00:17:40.499 END TEST bdev_nbd 00:17:40.499 ************************************ 00:17:40.499 04:39:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:40.499 00:17:40.499 real 0m10.600s 00:17:40.499 user 0m14.521s 00:17:40.499 sys 0m3.432s 00:17:40.499 04:39:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.499 04:39:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:40.499 skipping fio tests on NVMe due to multi-ns failures. 00:17:40.499 04:39:47 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:40.499 04:39:47 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:17:40.499 04:39:47 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
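Every attach and detach in the nbd trace above follows the same rhythm: an nbd_start_disk or nbd_stop_disk RPC against /var/tmp/spdk-nbd.sock, then up to twenty grep -q -w nbdX /proc/partitions probes until the kernel's view of the device matches. A condensed sketch of that pattern, assuming a running SPDK target on that socket; wait_nbd here is a hypothetical stand-in for the waitfornbd/waitfornbd_exit helpers seen in the xtrace, not their actual implementation:

# Hypothetical condensation of the attach/poll/detach cycle traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

wait_nbd() {                    # $1 = nbd name, $2 = yes|no (should it exist?)
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$1" /proc/partitions; then
            [[ $2 == yes ]] && return 0
        else
            [[ $2 == no ]] && return 0
        fi
        sleep 0.1
    done
    return 1                    # device never reached the desired state
}

"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
wait_nbd nbd0 yes               # block until /dev/nbd0 shows up in /proc/partitions
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
wait_nbd nbd0 no                # block until it disappears again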
00:17:40.499 04:39:47 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:40.499 04:39:47 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:40.499 04:39:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:40.499 04:39:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.499 04:39:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.499 ************************************ 00:17:40.499 START TEST bdev_verify 00:17:40.499 ************************************ 00:17:40.499 04:39:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:40.499 [2024-11-27 04:39:47.625133] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:40.499 [2024-11-27 04:39:47.625382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:17:40.755 [2024-11-27 04:39:47.783608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.755 [2024-11-27 04:39:47.884527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.755 [2024-11-27 04:39:47.884732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.322 Running I/O for 5 seconds... 00:17:43.644 20608.00 IOPS, 80.50 MiB/s [2024-11-27T04:39:51.787Z] 19648.00 IOPS, 76.75 MiB/s [2024-11-27T04:39:52.730Z] 19157.33 IOPS, 74.83 MiB/s [2024-11-27T04:39:53.756Z] 18880.00 IOPS, 73.75 MiB/s [2024-11-27T04:39:53.756Z] 18521.60 IOPS, 72.35 MiB/s 00:17:46.553 Latency(us) 00:17:46.553 [2024-11-27T04:39:53.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.553 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0xbd0bd 00:17:46.553 Nvme0n1 : 5.05 1496.88 5.85 0.00 0.00 85191.45 15526.99 77836.60 00:17:46.553 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:46.553 Nvme0n1 : 5.05 1546.42 6.04 0.00 0.00 82469.03 16232.76 72997.02 00:17:46.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0xa0000 00:17:46.553 Nvme1n1 : 5.05 1496.47 5.85 0.00 0.00 85115.31 16837.71 75820.11 00:17:46.553 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0xa0000 length 0xa0000 00:17:46.553 Nvme1n1 : 5.05 1545.99 6.04 0.00 0.00 82320.21 18753.38 67754.14 00:17:46.553 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0x80000 00:17:46.553 Nvme2n1 : 5.07 1503.65 5.87 0.00 0.00 84529.06 7763.50 75820.11 00:17:46.553 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x80000 length 0x80000 00:17:46.553 Nvme2n1 : 5.07 1553.62 6.07 0.00 0.00 81802.28 7461.02 66544.25 00:17:46.553 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0x80000 00:17:46.553 Nvme2n2 : 5.07 1503.24 5.87 0.00 0.00 84395.17 7763.50 70980.53 00:17:46.553 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x80000 length 0x80000 00:17:46.553 Nvme2n2 : 5.07 1552.83 6.07 0.00 0.00 81675.58 8822.15 66544.25 00:17:46.553 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0x80000 00:17:46.553 Nvme2n3 : 5.08 1511.24 5.90 0.00 0.00 83937.65 11947.72 73803.62 00:17:46.553 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x80000 length 0x80000 00:17:46.553 Nvme2n3 : 5.07 1552.10 6.06 0.00 0.00 81544.56 9527.93 70173.93 00:17:46.553 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.553 Verification LBA range: start 0x0 length 0x20000 00:17:46.553 Nvme3n1 : 5.08 1510.82 5.90 0.00 0.00 83811.07 8318.03 77433.30 00:17:46.554 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.554 Verification LBA range: start 0x20000 length 0x20000 00:17:46.554 Nvme3n1 : 5.08 1561.16 6.10 0.00 0.00 81048.67 9124.63 72190.42 00:17:46.554 [2024-11-27T04:39:53.757Z] =================================================================================================================== 00:17:46.554 [2024-11-27T04:39:53.757Z] Total : 18334.42 71.62 0.00 0.00 83128.95 7461.02 77836.60 00:17:47.498 00:17:47.498 real 0m7.123s 00:17:47.498 user 0m13.297s 00:17:47.498 sys 0m0.221s 00:17:47.498 04:39:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.498 ************************************ 00:17:47.498 END TEST bdev_verify 00:17:47.498 ************************************ 00:17:47.498 04:39:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:47.759 04:39:54 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:47.759 04:39:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:47.759 04:39:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.759 04:39:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:47.759 ************************************ 00:17:47.759 START TEST bdev_verify_big_io 00:17:47.759 ************************************ 00:17:47.759 04:39:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:47.759 [2024-11-27 04:39:54.826122] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
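bdev_verify above and bdev_verify_big_io below drive the same bdevperf binary; only the I/O size changes (-o 4096 vs -o 65536). The flags map directly onto the table headers, as this annotated form of the command from the log shows; -C and the trailing '' are forwarded verbatim by the harness and left unannotated here:

# -q 128  -> "depth: 128" in the job rows
# -o N    -> "IO size: N"; -w verify -> "workload: verify"
# -t 5    -> "Running I/O for 5 seconds..."
# -m 0x3  -> two reactors on cores 0 and 1, hence the paired 0x1/0x2 job rows
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''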
00:17:47.759 [2024-11-27 04:39:54.826239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:17:48.022 [2024-11-27 04:39:54.988772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:48.022 [2024-11-27 04:39:55.093872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.022 [2024-11-27 04:39:55.094022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.594 Running I/O for 5 seconds... 00:17:52.083 634.00 IOPS, 39.62 MiB/s [2024-11-27T04:40:01.958Z] 1474.50 IOPS, 92.16 MiB/s [2024-11-27T04:40:01.958Z] 1720.00 IOPS, 107.50 MiB/s [2024-11-27T04:40:01.958Z] 2172.50 IOPS, 135.78 MiB/s 00:17:54.755 Latency(us) 00:17:54.755 [2024-11-27T04:40:01.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0xbd0b 00:17:54.755 Nvme0n1 : 5.86 116.04 7.25 0.00 0.00 1057812.28 17745.13 1051802.39 00:17:54.755 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:54.755 Nvme0n1 : 5.66 108.79 6.80 0.00 0.00 1123183.60 31255.63 1645457.72 00:17:54.755 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0xa000 00:17:54.755 Nvme1n1 : 5.81 114.68 7.17 0.00 0.00 1034686.12 68560.74 877577.45 00:17:54.755 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0xa000 length 0xa000 00:17:54.755 Nvme1n1 : 5.87 113.01 7.06 0.00 0.00 1045616.81 50210.66 1677721.60 00:17:54.755 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0x8000 00:17:54.755 Nvme2n1 : 5.86 120.09 7.51 0.00 0.00 972366.91 51622.20 858219.13 00:17:54.755 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x8000 length 0x8000 00:17:54.755 Nvme2n1 : 5.90 117.83 7.36 0.00 0.00 982047.13 65334.35 1703532.70 00:17:54.755 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0x8000 00:17:54.755 Nvme2n2 : 5.89 124.90 7.81 0.00 0.00 912261.12 27424.30 903388.55 00:17:54.755 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x8000 length 0x8000 00:17:54.755 Nvme2n2 : 5.90 120.34 7.52 0.00 0.00 932078.98 29037.49 1729343.80 00:17:54.755 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0x8000 00:17:54.755 Nvme2n3 : 5.92 126.22 7.89 0.00 0.00 871590.47 28634.19 929199.66 00:17:54.755 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x8000 length 0x8000 00:17:54.755 Nvme2n3 : 5.95 132.48 8.28 0.00 0.00 815858.46 22181.42 1303460.63 00:17:54.755 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x0 length 0x2000 00:17:54.755 Nvme3n1 : 5.93 140.21 8.76 0.00 0.00 763145.09 7763.50 1142141.24 00:17:54.755 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:54.755 Verification LBA range: start 0x2000 length 0x2000 00:17:54.756 Nvme3n1 : 6.01 169.73 10.61 0.00 0.00 621886.59 696.32 1342177.28 00:17:54.756 [2024-11-27T04:40:01.959Z] =================================================================================================================== 00:17:54.756 [2024-11-27T04:40:01.959Z] Total : 1504.32 94.02 0.00 0.00 909680.84 696.32 1729343.80 00:17:56.668 00:17:56.668 real 0m8.600s 00:17:56.668 user 0m16.187s 00:17:56.668 sys 0m0.249s 00:17:56.668 ************************************ 00:17:56.668 END TEST bdev_verify_big_io 00:17:56.668 ************************************ 00:17:56.668 04:40:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.668 04:40:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.668 04:40:03 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.668 04:40:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:56.668 04:40:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.668 04:40:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.668 ************************************ 00:17:56.668 START TEST bdev_write_zeroes 00:17:56.668 ************************************ 00:17:56.668 04:40:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.668 [2024-11-27 04:40:03.496823] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:56.668 [2024-11-27 04:40:03.496948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60642 ] 00:17:56.668 [2024-11-27 04:40:03.654702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.668 [2024-11-27 04:40:03.758197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.237 Running I/O for 1 seconds... 
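The Total rows above are internally consistent: MiB/s is just IOPS times the configured I/O size, divided by 2^20. A quick shell cross-check against the two verify totals:

printf '%.2f MiB/s\n' "$(echo '1504.32 * 65536 / 1048576' | bc -l)"    # big_io total: 94.02
printf '%.2f MiB/s\n' "$(echo '18334.42 * 4096 / 1048576' | bc -l)"    # verify total: 71.62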
00:17:58.179 46784.00 IOPS, 182.75 MiB/s 00:17:58.179 Latency(us) 00:17:58.179 [2024-11-27T04:40:05.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.179 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme0n1 : 1.02 7832.92 30.60 0.00 0.00 16305.73 10183.29 24500.38 00:17:58.179 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme1n1 : 1.02 7823.39 30.56 0.00 0.00 16306.18 11191.53 25004.50 00:17:58.179 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme2n1 : 1.02 7813.94 30.52 0.00 0.00 16255.04 11090.71 22282.24 00:17:58.179 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme2n2 : 1.03 7804.58 30.49 0.00 0.00 16223.29 11040.30 22383.06 00:17:58.179 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme2n3 : 1.03 7795.61 30.45 0.00 0.00 16190.72 7259.37 22988.01 00:17:58.179 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.179 Nvme3n1 : 1.03 7724.48 30.17 0.00 0.00 16301.21 9074.22 28634.19 00:17:58.179 [2024-11-27T04:40:05.382Z] =================================================================================================================== 00:17:58.179 [2024-11-27T04:40:05.382Z] Total : 46794.91 182.79 0.00 0.00 16263.64 7259.37 28634.19 00:17:59.122 00:17:59.122 real 0m2.698s 00:17:59.122 user 0m2.391s 00:17:59.122 sys 0m0.185s 00:17:59.122 04:40:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.122 ************************************ 00:17:59.122 END TEST bdev_write_zeroes 00:17:59.122 ************************************ 00:17:59.122 04:40:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:59.122 04:40:06 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.122 04:40:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:59.122 04:40:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.122 04:40:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.122 ************************************ 00:17:59.122 START TEST bdev_json_nonenclosed 00:17:59.122 ************************************ 00:17:59.123 04:40:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.123 [2024-11-27 04:40:06.259605] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
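The START TEST / END TEST banners and the real/user/sys timings throughout this log come from the run_test wrapper in autotest_common.sh. A simplified sketch of the pattern, not the actual SPDK implementation (the real wrapper also manages xtrace state and return-code bookkeeping):

run_test() {                    # simplified; assumes the banner format seen in this log
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                   # produces the real/user/sys lines in the log
    echo "************ END TEST $name ************"
}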
00:17:59.123 [2024-11-27 04:40:06.259862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60695 ] 00:17:59.383 [2024-11-27 04:40:06.417561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.383 [2024-11-27 04:40:06.520769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.383 [2024-11-27 04:40:06.520992] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:59.383 [2024-11-27 04:40:06.521087] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.383 [2024-11-27 04:40:06.521112] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:59.644 00:17:59.644 real 0m0.510s 00:17:59.644 user 0m0.309s 00:17:59.644 sys 0m0.096s 00:17:59.644 04:40:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.644 ************************************ 00:17:59.644 END TEST bdev_json_nonenclosed 00:17:59.644 ************************************ 00:17:59.644 04:40:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:59.644 04:40:06 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.644 04:40:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:59.644 04:40:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.644 04:40:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.644 ************************************ 00:17:59.644 START TEST bdev_json_nonarray 00:17:59.644 ************************************ 00:17:59.644 04:40:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.644 [2024-11-27 04:40:06.833683] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:17:59.644 [2024-11-27 04:40:06.833808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:17:59.905 [2024-11-27 04:40:06.993263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.905 [2024-11-27 04:40:07.095671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.905 [2024-11-27 04:40:07.095769] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
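bdev_json_nonenclosed above and bdev_json_nonarray here are negative tests: each feeds bdevperf a deliberately malformed config, and the pass condition is the json_config error plus the spdk_app_stop'd-on-non-zero warning. Per the two error messages, a valid config must be an object enclosed in {} whose "subsystems" key is an array. A minimal, hypothetical illustration of the three shapes:

# Valid shape (hypothetical minimal config):
#   { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
# nonenclosed.json drops the outer {...}      -> "not enclosed in {}."
# nonarray.json makes "subsystems" a non-array -> "'subsystems' should be an array."
echo '{ "subsystems": [] }' | jq -e '.subsystems | type == "array"'   # prints true, exit 0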
00:17:59.905 [2024-11-27 04:40:07.095786] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.905 [2024-11-27 04:40:07.095795] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.167 ************************************ 00:18:00.167 END TEST bdev_json_nonarray 00:18:00.167 ************************************ 00:18:00.167 00:18:00.167 real 0m0.507s 00:18:00.167 user 0m0.305s 00:18:00.167 sys 0m0.096s 00:18:00.167 04:40:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.167 04:40:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:18:00.167 04:40:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:18:00.167 00:18:00.167 real 0m37.968s 00:18:00.167 user 0m58.432s 00:18:00.167 sys 0m5.469s 00:18:00.167 ************************************ 00:18:00.167 END TEST blockdev_nvme 00:18:00.167 ************************************ 00:18:00.167 04:40:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.167 04:40:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:00.430 04:40:07 -- spdk/autotest.sh@209 -- # uname -s 00:18:00.430 04:40:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:18:00.430 04:40:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:18:00.430 04:40:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.430 04:40:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.430 04:40:07 -- common/autotest_common.sh@10 -- # set +x 00:18:00.430 ************************************ 00:18:00.430 START TEST blockdev_nvme_gpt 00:18:00.430 ************************************ 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:18:00.430 * Looking for test storage... 
00:18:00.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.430 04:40:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.430 --rc genhtml_branch_coverage=1 00:18:00.430 --rc genhtml_function_coverage=1 00:18:00.430 --rc genhtml_legend=1 00:18:00.430 --rc geninfo_all_blocks=1 00:18:00.430 --rc geninfo_unexecuted_blocks=1 00:18:00.430 00:18:00.430 ' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.430 --rc 
genhtml_branch_coverage=1 00:18:00.430 --rc genhtml_function_coverage=1 00:18:00.430 --rc genhtml_legend=1 00:18:00.430 --rc geninfo_all_blocks=1 00:18:00.430 --rc geninfo_unexecuted_blocks=1 00:18:00.430 00:18:00.430 ' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.430 --rc genhtml_branch_coverage=1 00:18:00.430 --rc genhtml_function_coverage=1 00:18:00.430 --rc genhtml_legend=1 00:18:00.430 --rc geninfo_all_blocks=1 00:18:00.430 --rc geninfo_unexecuted_blocks=1 00:18:00.430 00:18:00.430 ' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.430 --rc genhtml_branch_coverage=1 00:18:00.430 --rc genhtml_function_coverage=1 00:18:00.430 --rc genhtml_legend=1 00:18:00.430 --rc geninfo_all_blocks=1 00:18:00.430 --rc geninfo_unexecuted_blocks=1 00:18:00.430 00:18:00.430 ' 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:00.430 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60799 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60799 
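Here waitforlisten blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that wait-for-listen pattern, assuming SPDK's stock scripts/rpc.py and its built-in rpc_get_methods method (a simplified illustration, not the verbatim autotest_common.sh helper):

    # Poll until the target process is alive and its RPC socket answers.
    waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
          >/dev/null 2>&1 && return 0               # RPC server is up
        sleep 0.1
      done
      return 1                                      # gave up waiting
    }
    # usage: waitforlisten_sketch "$spdk_tgt_pid"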
00:18:00.431 04:40:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60799 ']' 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.431 04:40:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:00.703 [2024-11-27 04:40:07.653508] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:00.703 [2024-11-27 04:40:07.653809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60799 ] 00:18:00.703 [2024-11-27 04:40:07.811882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.985 [2024-11-27 04:40:07.924059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.557 04:40:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.557 04:40:08 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:18:01.557 04:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:01.557 04:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:18:01.557 04:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:01.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.817 Waiting for block devices as requested 00:18:02.079 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:02.079 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:02.079 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:02.384 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:07.690 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:07.690 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.690 04:40:14 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:18:07.690 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:18:07.691 04:40:14 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:18:07.691 BYT; 00:18:07.691 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:18:07.691 BYT; 00:18:07.691 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:07.691 04:40:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:07.691 04:40:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:18:08.631 The operation has completed successfully. 00:18:08.631 04:40:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:18:09.570 The operation has completed successfully. 00:18:09.570 04:40:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:09.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:10.407 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:10.670 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:10.670 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:10.670 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:18:10.670 04:40:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.670 04:40:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:10.670 [] 00:18:10.670 04:40:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:10.670 04:40:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:18:10.670 04:40:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.670 04:40:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:10.931 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.931 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:10.931 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.931 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:10.931 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.931 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:18:10.931 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:10.931 04:40:18 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.931 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:11.193 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:11.193 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:11.194 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "109f1097-064d-4169-8662-752a8aa025a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "109f1097-064d-4169-8662-752a8aa025a4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fded87e7-f904-4fa6-a187-d2c698182e74"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fded87e7-f904-4fa6-a187-d2c698182e74",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d02cf72e-0587-401f-b567-7a473b5d9823"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d02cf72e-0587-401f-b567-7a473b5d9823",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8ae9e3ce-adc3-4c8a-8571-da387192a619"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ae9e3ce-adc3-4c8a-8571-da387192a619",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7058306f-1d63-4595-a9da-64318ea112cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7058306f-1d63-4595-a9da-64318ea112cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:11.194 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:11.194 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:18:11.194 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:11.194 04:40:18 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60799 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60799 ']' 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60799 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60799 00:18:11.194 killing process with pid 60799 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60799' 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60799 00:18:11.194 04:40:18 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60799 00:18:13.110 04:40:19 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:13.110 04:40:19 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:13.110 04:40:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:13.110 04:40:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.110 04:40:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:13.110 ************************************ 00:18:13.110 START TEST bdev_hello_world 00:18:13.110 ************************************ 00:18:13.110 04:40:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:13.110 
[2024-11-27 04:40:19.902777] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:13.110 [2024-11-27 04:40:19.902902] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61429 ] 00:18:13.110 [2024-11-27 04:40:20.063549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.110 [2024-11-27 04:40:20.171625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.681 [2024-11-27 04:40:20.722759] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:13.681 [2024-11-27 04:40:20.722959] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:13.681 [2024-11-27 04:40:20.722990] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:13.681 [2024-11-27 04:40:20.725417] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:13.681 [2024-11-27 04:40:20.726248] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:13.681 [2024-11-27 04:40:20.726275] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:13.681 [2024-11-27 04:40:20.726630] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:18:13.681 00:18:13.681 [2024-11-27 04:40:20.726647] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:14.621 ************************************ 00:18:14.621 END TEST bdev_hello_world 00:18:14.621 ************************************ 00:18:14.621 00:18:14.621 real 0m1.653s 00:18:14.621 user 0m1.369s 00:18:14.621 sys 0m0.174s 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:14.621 04:40:21 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:14.621 04:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.621 04:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.621 04:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:14.621 ************************************ 00:18:14.621 START TEST bdev_bounds 00:18:14.621 ************************************ 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61471 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61471' 00:18:14.621 Process bdevio pid: 61471 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61471 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61471 ']' 00:18:14.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.621 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.622 04:40:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:14.622 [2024-11-27 04:40:21.631278] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:14.622 [2024-11-27 04:40:21.631401] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:18:14.622 [2024-11-27 04:40:21.793175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:14.881 [2024-11-27 04:40:21.906496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.881 [2024-11-27 04:40:21.906813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.881 [2024-11-27 04:40:21.906830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.451 04:40:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.451 04:40:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:15.451 04:40:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:15.451 I/O targets: 00:18:15.451 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:15.451 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:18:15.451 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:18:15.451 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:15.451 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:15.451 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:15.451 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:15.451 00:18:15.451 00:18:15.451 CUnit - A unit testing framework for C - Version 2.1-3 00:18:15.451 http://cunit.sourceforge.net/ 00:18:15.451 00:18:15.451 00:18:15.451 Suite: bdevio tests on: Nvme3n1 00:18:15.451 Test: blockdev write read block ...passed 00:18:15.451 Test: blockdev write zeroes read block ...passed 00:18:15.451 Test: blockdev write zeroes read no split ...passed 00:18:15.451 Test: blockdev write zeroes read split ...passed 00:18:15.451 Test: blockdev write zeroes read split partial ...passed 00:18:15.451 Test: blockdev reset ...[2024-11-27 04:40:22.630395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:18:15.451 passed 00:18:15.451 Test: blockdev write read 8 blocks ...[2024-11-27 04:40:22.633966] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:18:15.451 passed 00:18:15.451 Test: blockdev write read size > 128k ...passed 00:18:15.451 Test: blockdev write read invalid size ...passed 00:18:15.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.451 Test: blockdev write read max offset ...passed 00:18:15.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.451 Test: blockdev writev readv 8 blocks ...passed 00:18:15.451 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.451 Test: blockdev writev readv block ...passed 00:18:15.451 Test: blockdev writev readv size > 128k ...passed 00:18:15.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.713 Test: blockdev comparev and writev ...[2024-11-27 04:40:22.656093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba004000 len:0x1000 00:18:15.713 [2024-11-27 04:40:22.656142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.713 passed 00:18:15.713 Test: blockdev nvme passthru rw ...passed 00:18:15.713 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.713 Test: blockdev nvme admin passthru ...[2024-11-27 04:40:22.658797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:15.713 [2024-11-27 04:40:22.658833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:15.713 passed 00:18:15.713 Test: blockdev copy ...passed 00:18:15.713 Suite: bdevio tests on: Nvme2n3 00:18:15.713 Test: blockdev write read block ...passed 00:18:15.713 Test: blockdev write zeroes read block ...passed 00:18:15.713 Test: blockdev write zeroes read no split ...passed 00:18:15.713 Test: blockdev write zeroes read split ...passed 00:18:15.713 Test: blockdev write zeroes read split partial ...passed 00:18:15.713 Test: blockdev reset ...[2024-11-27 04:40:22.719534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:15.713 [2024-11-27 04:40:22.726309] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:18:15.713 Test: blockdev write read 8 blocks ...
00:18:15.713 passed 00:18:15.713 Test: blockdev write read size > 128k ...passed 00:18:15.713 Test: blockdev write read invalid size ...passed 00:18:15.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.713 Test: blockdev write read max offset ...passed 00:18:15.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.713 Test: blockdev writev readv 8 blocks ...passed 00:18:15.713 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.713 Test: blockdev writev readv block ...passed 00:18:15.713 Test: blockdev writev readv size > 128k ...passed 00:18:15.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.713 Test: blockdev comparev and writev ...[2024-11-27 04:40:22.745327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba002000 len:0x1000 00:18:15.713 [2024-11-27 04:40:22.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.713 passed 00:18:15.713 Test: blockdev nvme passthru rw ...passed 00:18:15.713 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.713 Test: blockdev nvme admin passthru ...[2024-11-27 04:40:22.748090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:15.713 [2024-11-27 04:40:22.748127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:15.713 passed 00:18:15.713 Test: blockdev copy ...passed 00:18:15.714 Suite: bdevio tests on: Nvme2n2 00:18:15.714 Test: blockdev write read block ...passed 00:18:15.714 Test: blockdev write zeroes read block ...passed 00:18:15.714 Test: blockdev write zeroes read no split ...passed 00:18:15.714 Test: blockdev write zeroes read split ...passed 00:18:15.714 Test: blockdev write zeroes read split partial ...passed 00:18:15.714 Test: blockdev reset ...[2024-11-27 04:40:22.802294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:15.714 [2024-11-27 04:40:22.806930] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:18:15.714 Test: blockdev write read 8 blocks ...
00:18:15.714 passed 00:18:15.714 Test: blockdev write read size > 128k ...passed 00:18:15.714 Test: blockdev write read invalid size ...passed 00:18:15.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.714 Test: blockdev write read max offset ...passed 00:18:15.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.714 Test: blockdev writev readv 8 blocks ...passed 00:18:15.714 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.714 Test: blockdev writev readv block ...passed 00:18:15.714 Test: blockdev writev readv size > 128k ...passed 00:18:15.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.714 Test: blockdev comparev and writev ...[2024-11-27 04:40:22.819217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc638000 len:0x1000 00:18:15.714 [2024-11-27 04:40:22.819256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.714 passed 00:18:15.714 Test: blockdev nvme passthru rw ...passed 00:18:15.714 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:40:22.820738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:15.714 [2024-11-27 04:40:22.820761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:15.714 passed 00:18:15.714 Test: blockdev nvme admin passthru ...passed 00:18:15.714 Test: blockdev copy ...passed 00:18:15.714 Suite: bdevio tests on: Nvme2n1 00:18:15.714 Test: blockdev write read block ...passed 00:18:15.714 Test: blockdev write zeroes read block ...passed 00:18:15.714 Test: blockdev write zeroes read no split ...passed 00:18:15.714 Test: blockdev write zeroes read split ...passed 00:18:15.714 Test: blockdev write zeroes read split partial ...passed 00:18:15.714 Test: blockdev reset ...[2024-11-27 04:40:22.872820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:15.714 [2024-11-27 04:40:22.875906] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:18:15.714 Test: blockdev write read 8 blocks ...
00:18:15.714 passed 00:18:15.714 Test: blockdev write read size > 128k ...passed 00:18:15.714 Test: blockdev write read invalid size ...passed 00:18:15.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.714 Test: blockdev write read max offset ...passed 00:18:15.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.714 Test: blockdev writev readv 8 blocks ...passed 00:18:15.714 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.714 Test: blockdev writev readv block ...passed 00:18:15.714 Test: blockdev writev readv size > 128k ...passed 00:18:15.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.714 Test: blockdev comparev and writev ...[2024-11-27 04:40:22.894799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc634000 len:0x1000 00:18:15.714 [2024-11-27 04:40:22.894937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.714 passed 00:18:15.714 Test: blockdev nvme passthru rw ...passed 00:18:15.714 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.714 Test: blockdev nvme admin passthru ...[2024-11-27 04:40:22.897610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:15.714 [2024-11-27 04:40:22.897643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:15.714 passed 00:18:15.714 Test: blockdev copy ...passed 00:18:15.714 Suite: bdevio tests on: Nvme1n1p2 00:18:15.714 Test: blockdev write read block ...passed 00:18:15.974 Test: blockdev write zeroes read block ...passed 00:18:15.974 Test: blockdev write zeroes read no split ...passed 00:18:15.974 Test: blockdev write zeroes read split ...passed 00:18:15.974 Test: blockdev write zeroes read split partial ...passed 00:18:15.974 Test: blockdev reset ...[2024-11-27 04:40:22.956798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:18:15.974 [2024-11-27 04:40:22.960838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:18:15.974 Test: blockdev write read 8 blocks ...
00:18:15.974 passed 00:18:15.974 Test: blockdev write read size > 128k ...passed 00:18:15.974 Test: blockdev write read invalid size ...passed 00:18:15.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.974 Test: blockdev write read max offset ...passed 00:18:15.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.974 Test: blockdev writev readv 8 blocks ...passed 00:18:15.974 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.974 Test: blockdev writev readv block ...passed 00:18:15.974 Test: blockdev writev readv size > 128k ...passed 00:18:15.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.974 Test: blockdev comparev and writev ...[2024-11-27 04:40:22.982259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cc630000 len:0x1000 00:18:15.974 [2024-11-27 04:40:22.982368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.974 passed 00:18:15.974 Test: blockdev nvme passthru rw ...passed 00:18:15.974 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.974 Test: blockdev nvme admin passthru ...passed 00:18:15.974 Test: blockdev copy ...passed 00:18:15.974 Suite: bdevio tests on: Nvme1n1p1 00:18:15.974 Test: blockdev write read block ...passed 00:18:15.974 Test: blockdev write zeroes read block ...passed 00:18:15.975 Test: blockdev write zeroes read no split ...passed 00:18:15.975 Test: blockdev write zeroes read split ...passed 00:18:15.975 Test: blockdev write zeroes read split partial ...passed 00:18:15.975 Test: blockdev reset ...[2024-11-27 04:40:23.035114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:18:15.975 [2024-11-27 04:40:23.038004] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:18:15.975 passed 00:18:15.975 Test: blockdev write read 8 blocks ...passed 00:18:15.975 Test: blockdev write read size > 128k ...passed 00:18:15.975 Test: blockdev write read invalid size ...passed 00:18:15.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.975 Test: blockdev write read max offset ...passed 00:18:15.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.975 Test: blockdev writev readv 8 blocks ...passed 00:18:15.975 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.975 Test: blockdev writev readv block ...passed 00:18:15.975 Test: blockdev writev readv size > 128k ...passed 00:18:15.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.975 Test: blockdev comparev and writev ...[2024-11-27 04:40:23.050933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2baa0e000 len:0x1000 00:18:15.975 [2024-11-27 04:40:23.050969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:15.975 passed 00:18:15.975 Test: blockdev nvme passthru rw ...passed 00:18:15.975 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.975 Test: blockdev nvme admin passthru ...passed 00:18:15.975 Test: blockdev copy ...passed 00:18:15.975 Suite: bdevio tests on: Nvme0n1 00:18:15.975 Test: blockdev write read block ...passed 00:18:15.975 Test: blockdev write zeroes read block ...passed 00:18:15.975 Test: blockdev write zeroes read no split ...passed 00:18:15.975 Test: blockdev write zeroes read split ...passed 00:18:15.975 Test: blockdev write zeroes read split partial ...passed 00:18:15.975 Test: blockdev reset ...[2024-11-27 04:40:23.099802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:18:15.975 [2024-11-27 04:40:23.102601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:18:15.975 passed 00:18:15.975 Test: blockdev write read 8 blocks ...passed 00:18:15.975 Test: blockdev write read size > 128k ...passed 00:18:15.975 Test: blockdev write read invalid size ...passed 00:18:15.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.975 Test: blockdev write read max offset ...passed 00:18:15.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.975 Test: blockdev writev readv 8 blocks ...passed 00:18:15.975 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.975 Test: blockdev writev readv block ...passed 00:18:15.975 Test: blockdev writev readv size > 128k ...passed 00:18:15.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.975 Test: blockdev comparev and writev ...passed 00:18:15.975 Test: blockdev nvme passthru rw ...[2024-11-27 04:40:23.114829] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:18:15.975 passed 00:18:15.975 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.975 Test: blockdev nvme admin passthru ...[2024-11-27 04:40:23.115524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:18:15.975 [2024-11-27 04:40:23.115564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:18:15.975 passed 00:18:15.975 Test: blockdev copy ...passed 00:18:15.975 00:18:15.975 Run Summary: Type Total Ran Passed Failed Inactive 00:18:15.975 suites 7 7 n/a 0 0 00:18:15.975 tests 161 161 161 0 0 00:18:15.975 asserts 1025 1025 1025 0 n/a 00:18:15.975 00:18:15.975 Elapsed time = 1.373 seconds 00:18:15.975 0 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61471 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61471 ']' 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61471 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61471 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61471' 00:18:15.975 killing process with pid 61471 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61471 00:18:15.975 04:40:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61471 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:16.917 00:18:16.917 real 0m2.445s 00:18:16.917 user 0m6.193s 00:18:16.917 sys 0m0.306s 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.917 ************************************ 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:16.917 END TEST bdev_bounds 00:18:16.917 ************************************ 00:18:16.917 04:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:16.917 04:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:16.917 04:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.917 04:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:16.917 ************************************ 00:18:16.917 START TEST bdev_nbd 00:18:16.917 ************************************ 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61531 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61531 /var/tmp/spdk-nbd.sock 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61531 ']' 00:18:16.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.917 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:17.178 [2024-11-27 04:40:24.144224] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
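For orientation, the nbd harness that starts here follows the usual SPDK functional-test lifecycle: boot a dedicated bdev_svc app on a private RPC socket, poll until it answers RPCs, run the checks, then kill and reap the process (the same kill -0 / kill / wait sequence that closed out bdev_bounds above). A minimal sketch of that lifecycle, assuming the repo paths from the trace; the polling loop and teardown below are simplified stand-ins for the waitforlisten/killprocess helpers in autotest_common.sh, not the exact upstream code:

# start the bdev service on its own RPC socket with the test bdev config
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
# poll the socket until the app answers RPCs (simplified waitforlisten)
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nbd_pid" || exit 1    # give up if the app died during startup
    sleep 0.1
done
# ... NBD start/stop and data-verify checks run here ...
# teardown mirrors killprocess: confirm the pid is alive, kill it, reap it
kill -0 "$nbd_pid" && kill "$nbd_pid" && wait "$nbd_pid"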
00:18:17.178 [2024-11-27 04:40:24.144343] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.178 [2024-11-27 04:40:24.301921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.438 [2024-11-27 04:40:24.406381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.009 04:40:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:18:18.009 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:18.009 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:18.278 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:18.278 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.279 1+0 records in 00:18:18.279 1+0 records out 00:18:18.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123933 s, 3.3 MB/s 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.279 1+0 records in 00:18:18.279 1+0 records out 00:18:18.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916534 s, 4.5 MB/s 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.279 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.541 1+0 records in 00:18:18.541 1+0 records out 00:18:18.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907943 s, 4.5 MB/s 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.541 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.802 1+0 records in 00:18:18.802 1+0 records out 00:18:18.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101766 s, 4.0 MB/s 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.802 04:40:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.062 1+0 records in 00:18:19.062 1+0 records out 00:18:19.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107789 s, 3.8 MB/s 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:19.062 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
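Each iteration of the loop above is one nbd_start_disk RPC: the target exports the named bdev through the kernel nbd driver and prints the device node it claimed. In this first pass no device path is supplied, so the target picks the next free /dev/nbdX and the script captures it; the data-verify pass later in the log pins each export to an explicit node instead. Both call shapes, sketched with the socket from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# let the target choose the node; the RPC prints it (e.g. /dev/nbd5)
nbd_device=$($rpc nbd_start_disk Nvme2n3)
# or pin the export to a specific node, as the second pass does
$rpc nbd_start_disk Nvme2n3 /dev/nbd13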
00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.324 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.325 1+0 records in 00:18:19.325 1+0 records out 00:18:19.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114883 s, 3.6 MB/s 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:19.325 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:19.586 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.587 1+0 records in 00:18:19.587 1+0 records out 00:18:19.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136899 s, 3.0 MB/s 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:19.587 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd0", 00:18:19.849 "bdev_name": "Nvme0n1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd1", 00:18:19.849 "bdev_name": "Nvme1n1p1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd2", 00:18:19.849 "bdev_name": "Nvme1n1p2" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd3", 00:18:19.849 "bdev_name": "Nvme2n1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd4", 00:18:19.849 "bdev_name": "Nvme2n2" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd5", 00:18:19.849 "bdev_name": "Nvme2n3" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd6", 00:18:19.849 "bdev_name": "Nvme3n1" 00:18:19.849 } 00:18:19.849 ]' 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd0", 00:18:19.849 "bdev_name": "Nvme0n1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd1", 00:18:19.849 "bdev_name": "Nvme1n1p1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd2", 00:18:19.849 "bdev_name": "Nvme1n1p2" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd3", 00:18:19.849 "bdev_name": "Nvme2n1" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd4", 00:18:19.849 "bdev_name": "Nvme2n2" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd5", 00:18:19.849 "bdev_name": "Nvme2n3" 00:18:19.849 }, 00:18:19.849 { 00:18:19.849 "nbd_device": "/dev/nbd6", 00:18:19.849 "bdev_name": "Nvme3n1" 00:18:19.849 } 00:18:19.849 ]' 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.849 04:40:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.124 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.386 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.646 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.907 04:40:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:20.907 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.907 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.907 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.907 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.167 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:18:21.429 04:40:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.429 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:21.690 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:21.691 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:18:21.953 /dev/nbd0 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.953 1+0 records in 00:18:21.953 1+0 records out 00:18:21.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813163 s, 5.0 MB/s 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:21.953 04:40:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:18:21.953 /dev/nbd1 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.213 1+0 records in 00:18:22.213 1+0 records out 00:18:22.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851813 s, 4.8 MB/s 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:18:22.213 /dev/nbd10 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.213 1+0 records in 00:18:22.213 1+0 records out 00:18:22.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758851 s, 5.4 MB/s 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.213 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:18:22.474 /dev/nbd11 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.474 1+0 records in 00:18:22.474 1+0 records out 00:18:22.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987997 s, 4.1 MB/s 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.474 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:18:22.734 /dev/nbd12 00:18:22.734 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:22.735 04:40:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.735 1+0 records in 00:18:22.735 1+0 records out 00:18:22.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128987 s, 3.2 MB/s 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.735 04:40:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:18:22.996 /dev/nbd13 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.996 1+0 records in 00:18:22.996 1+0 records out 00:18:22.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124339 s, 3.3 MB/s 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
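The grep/dd/stat sequence repeated for every device above is the readiness check: poll /proc/partitions until the kernel has registered the node, then prove that one 4 KiB O_DIRECT read completes and produces a non-empty file; teardown polls the same file until the name disappears again. Condensed into the two helpers the trace is stepping through (loop bounds taken from the trace, output path simplified to /tmp/nbdtest):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do    # wait for the kernel to register the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # one direct-I/O read proves the NBD connection actually serves data
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || return 1    # the trace checks size != 0
    rm -f /tmp/nbdtest
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do    # wait for the node to vanish after nbd_stop_disk
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}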
00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.996 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:18:23.257 /dev/nbd14 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.257 1+0 records in 00:18:23.257 1+0 records out 00:18:23.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112487 s, 3.6 MB/s 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:23.257 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd0", 00:18:23.518 "bdev_name": "Nvme0n1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd1", 00:18:23.518 "bdev_name": "Nvme1n1p1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd10", 00:18:23.518 "bdev_name": "Nvme1n1p2" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd11", 00:18:23.518 "bdev_name": "Nvme2n1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd12", 00:18:23.518 "bdev_name": "Nvme2n2" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd13", 00:18:23.518 "bdev_name": "Nvme2n3" 00:18:23.518 }, 00:18:23.518 { 
00:18:23.518 "nbd_device": "/dev/nbd14", 00:18:23.518 "bdev_name": "Nvme3n1" 00:18:23.518 } 00:18:23.518 ]' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd0", 00:18:23.518 "bdev_name": "Nvme0n1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd1", 00:18:23.518 "bdev_name": "Nvme1n1p1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd10", 00:18:23.518 "bdev_name": "Nvme1n1p2" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd11", 00:18:23.518 "bdev_name": "Nvme2n1" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd12", 00:18:23.518 "bdev_name": "Nvme2n2" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd13", 00:18:23.518 "bdev_name": "Nvme2n3" 00:18:23.518 }, 00:18:23.518 { 00:18:23.518 "nbd_device": "/dev/nbd14", 00:18:23.518 "bdev_name": "Nvme3n1" 00:18:23.518 } 00:18:23.518 ]' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:23.518 /dev/nbd1 00:18:23.518 /dev/nbd10 00:18:23.518 /dev/nbd11 00:18:23.518 /dev/nbd12 00:18:23.518 /dev/nbd13 00:18:23.518 /dev/nbd14' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:23.518 /dev/nbd1 00:18:23.518 /dev/nbd10 00:18:23.518 /dev/nbd11 00:18:23.518 /dev/nbd12 00:18:23.518 /dev/nbd13 00:18:23.518 /dev/nbd14' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:23.518 256+0 records in 00:18:23.518 256+0 records out 00:18:23.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622428 s, 168 MB/s 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:23.518 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:23.780 256+0 records in 00:18:23.780 256+0 records out 00:18:23.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251495 s, 4.2 MB/s 00:18:23.780 
04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:23.780 04:40:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:24.042 256+0 records in 00:18:24.042 256+0 records out 00:18:24.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232248 s, 4.5 MB/s 00:18:24.042 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.042 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:24.301 256+0 records in 00:18:24.301 256+0 records out 00:18:24.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2705 s, 3.9 MB/s 00:18:24.301 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.301 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:24.562 256+0 records in 00:18:24.562 256+0 records out 00:18:24.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.262808 s, 4.0 MB/s 00:18:24.562 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.562 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:24.824 256+0 records in 00:18:24.824 256+0 records out 00:18:24.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241464 s, 4.3 MB/s 00:18:24.824 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.824 04:40:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:25.085 256+0 records in 00:18:25.085 256+0 records out 00:18:25.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259287 s, 4.0 MB/s 00:18:25.085 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.085 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:18:25.346 256+0 records in 00:18:25.346 256+0 records out 00:18:25.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240643 s, 4.4 MB/s 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.346 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.648 
04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.648 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.908 04:40:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.169 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:26.430 04:40:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.430 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.690 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.950 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.951 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:26.951 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:26.951 04:40:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:27.211 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:27.211 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:27.211 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:27.211 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:27.211 
04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:27.211 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:27.212 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:27.472 malloc_lvol_verify 00:18:27.472 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:27.472 244caf90-c2d5-422e-9942-efcb7df741d4 00:18:27.472 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:27.734 186eff0b-b8be-470d-abb2-c8438ebeece6 00:18:27.734 04:40:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:27.995 /dev/nbd0 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:27.995 mke2fs 1.47.0 (5-Feb-2023) 00:18:27.995 Discarding device blocks: 0/4096 done 00:18:27.995 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:27.995 00:18:27.995 Allocating group tables: 0/1 done 00:18:27.995 Writing inode tables: 0/1 done 00:18:27.995 Creating journal (1024 blocks): done 00:18:27.995 Writing superblocks and filesystem accounting information: 0/1 done 00:18:27.995 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.995 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:27.996 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.996 04:40:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61531 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61531 ']' 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61531 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61531 00:18:28.257 killing process with pid 61531 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61531' 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61531 00:18:28.257 04:40:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61531 00:18:29.200 ************************************ 00:18:29.200 END TEST bdev_nbd 00:18:29.200 ************************************ 00:18:29.200 04:40:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:29.200 00:18:29.200 real 0m12.068s 00:18:29.200 user 0m16.240s 00:18:29.200 sys 0m4.009s 00:18:29.200 04:40:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.200 04:40:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:18:29.200 skipping fio tests on NVMe due to multi-ns failures. 00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
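Between attach and teardown above, nbd_dd_data_verify pushed one shared 1 MiB random pattern through all seven devices and compared each one back against the source file. Condensed from the commands visible in the trace (same devices, sizes, and flags; error handling omitted):

nbd_list='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # one shared 1 MiB pattern
for dev in $nbd_list; do
    dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct  # write phase
done
for dev in $nbd_list; do
    cmp -b -n 1M nbdrandtest "$dev"                             # verify phase: byte-for-byte compare
done
rm nbdrandtest

The run then closed with a second smoke test visible above: a 4 MiB lvol carved from a 16 MiB malloc bdev, exposed as /dev/nbd0 and formatted with mkfs.ext4 before the nbd app was killed.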
00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:29.200 04:40:36 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:29.200 04:40:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:29.200 04:40:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.200 04:40:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:29.200 ************************************ 00:18:29.200 START TEST bdev_verify 00:18:29.200 ************************************ 00:18:29.200 04:40:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:29.200 [2024-11-27 04:40:36.271810] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:29.200 [2024-11-27 04:40:36.271944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:18:29.465 [2024-11-27 04:40:36.436976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.465 [2024-11-27 04:40:36.573113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.465 [2024-11-27 04:40:36.573122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.038 Running I/O for 5 seconds... 
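The table that follows comes from a single bdevperf invocation; the path and flags below are verbatim from the run_test line above, with the annotations inferred from the output (each bdev reports one job per core of the 0x3 mask, hence the paired Core Mask 0x1/0x2 rows):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# -q 128: queue depth per job            -o 4096: 4 KiB I/Os
# -w verify: write, read back, compare   -t 5: run time in seconds
# -m 0x3: cores 0 and 1; with -C every bdev gets a job on each core,
#         which is why each device appears twice in the table below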
00:18:32.422 16128.00 IOPS, 63.00 MiB/s
[2024-11-27T04:40:40.570Z] 17856.00 IOPS, 69.75 MiB/s
[2024-11-27T04:40:41.513Z] 17429.33 IOPS, 68.08 MiB/s
[2024-11-27T04:40:42.455Z] 17696.00 IOPS, 69.12 MiB/s
[2024-11-27T04:40:42.455Z] 17894.40 IOPS, 69.90 MiB/s
00:18:35.252 Latency(us)
[2024-11-27T04:40:42.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.252 Verification LBA range: start 0x0 length 0xbd0bd
00:18:35.252 Nvme0n1 : 5.06 1239.10 4.84 0.00 0.00 102863.44 24702.03 93968.54
00:18:35.252 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.252 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:35.252 Nvme0n1 : 5.07 1273.93 4.98 0.00 0.00 99964.49 7813.91 98404.82
00:18:35.252 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.252 Verification LBA range: start 0x0 length 0x4ff80
00:18:35.253 Nvme1n1p1 : 5.06 1238.73 4.84 0.00 0.00 102742.57 26214.40 88322.36
00:18:35.253 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x4ff80 length 0x4ff80
00:18:35.253 Nvme1n1p1 : 5.09 1281.30 5.01 0.00 0.00 99256.75 15426.17 82272.89
00:18:35.253 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x0 length 0x4ff7f
00:18:35.253 Nvme1n1p2 : 5.09 1245.17 4.86 0.00 0.00 101975.33 11191.53 79449.80
00:18:35.253 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:18:35.253 Nvme1n1p2 : 5.10 1280.92 5.00 0.00 0.00 99044.74 13510.50 74206.92
00:18:35.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x0 length 0x80000
00:18:35.253 Nvme2n1 : 5.09 1244.84 4.86 0.00 0.00 101836.52 11594.83 77030.01
00:18:35.253 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x80000 length 0x80000
00:18:35.253 Nvme2n1 : 5.10 1280.59 5.00 0.00 0.00 98883.97 13308.85 72593.72
00:18:35.253 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x0 length 0x80000
00:18:35.253 Nvme2n2 : 5.09 1244.48 4.86 0.00 0.00 101686.49 11494.01 78643.20
00:18:35.253 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x80000 length 0x80000
00:18:35.253 Nvme2n2 : 5.10 1280.26 5.00 0.00 0.00 98728.50 13208.02 72190.42
00:18:35.253 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x0 length 0x80000
00:18:35.253 Nvme2n3 : 5.09 1244.08 4.86 0.00 0.00 101531.76 11544.42 78643.20
00:18:35.253 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x80000 length 0x80000
00:18:35.253 Nvme2n3 : 5.10 1279.87 5.00 0.00 0.00 98597.97 12905.55 76626.71
00:18:35.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x0 length 0x20000
00:18:35.253 Nvme3n1 : 5.10 1254.22 4.90 0.00 0.00 100762.74 7410.61 79449.80
00:18:35.253 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.253 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.10 1279.52 5.00 0.00 0.00 98496.27 13409.67 79853.10
[2024-11-27T04:40:42.456Z] ===================================================================================================================
[2024-11-27T04:40:42.456Z] Total : 17667.01 69.01 0.00 0.00 100430.53 7410.61 98404.82
00:18:36.641
00:18:36.641 real 0m7.286s
00:18:36.641 user 0m13.471s
00:18:36.641 sys 0m0.296s
00:18:36.641 04:40:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:36.641 ************************************
00:18:36.641 END TEST bdev_verify
00:18:36.641 ************************************
00:18:36.641 04:40:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:18:36.641 04:40:43 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:36.641 04:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:18:36.641 04:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:36.641 04:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:36.641 ************************************
00:18:36.641 START TEST bdev_verify_big_io
00:18:36.641 ************************************
00:18:36.641 04:40:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:36.641 [2024-11-27 04:40:43.619314] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:18:36.641 [2024-11-27 04:40:43.619439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62050 ]
00:18:36.641 [2024-11-27 04:40:43.776714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:36.902 [2024-11-27 04:40:43.881661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:36.902 [2024-11-27 04:40:43.881854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:37.473 Running I/O for 5 seconds...
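The big-I/O pass that starts here reuses the same harness with -o 65536, so every transfer is 64 KiB and throughput in the next table is simply IOPS / 16. A one-line sanity check against the Nvme3n1 (Core Mask 0x1) row reported below:

awk 'BEGIN { printf "%.2f MiB/s\n", 127.31 / 16 }'   # prints 7.96, matching the 127.31 IOPS row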
00:18:43.656 1238.00 IOPS, 77.38 MiB/s
[2024-11-27T04:40:51.120Z] 2361.50 IOPS, 147.59 MiB/s
[2024-11-27T04:40:51.120Z] 2792.67 IOPS, 174.54 MiB/s
00:18:43.917 Latency(us)
[2024-11-27T04:40:51.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:43.917 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0xbd0b
00:18:43.917 Nvme0n1 : 6.07 85.97 5.37 0.00 0.00 1398001.28 17543.48 1587382.74
00:18:43.917 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:43.917 Nvme0n1 : 5.96 76.38 4.77 0.00 0.00 1577326.43 28835.84 2013265.92
00:18:43.917 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x4ff8
00:18:43.917 Nvme1n1p1 : 6.07 94.82 5.93 0.00 0.00 1243322.25 109697.18 1329271.73
00:18:43.917 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x4ff8 length 0x4ff8
00:18:43.917 Nvme1n1p1 : 6.08 73.72 4.61 0.00 0.00 1609395.82 107277.39 2374621.34
00:18:43.917 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x4ff7
00:18:43.917 Nvme1n1p2 : 6.17 90.88 5.68 0.00 0.00 1231129.30 109697.18 1406705.03
00:18:43.917 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x4ff7 length 0x4ff7
00:18:43.917 Nvme1n1p2 : 6.08 93.61 5.85 0.00 0.00 1213500.97 115343.36 1355082.83
00:18:43.917 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x8000
00:18:43.917 Nvme2n1 : 6.28 89.85 5.62 0.00 0.00 1201080.38 95985.03 2064888.12
00:18:43.917 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x8000 length 0x8000
00:18:43.917 Nvme2n1 : 6.17 98.39 6.15 0.00 0.00 1121954.06 87112.47 1193763.45
00:18:43.917 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x8000
00:18:43.917 Nvme2n2 : 6.28 93.16 5.82 0.00 0.00 1122376.86 96388.33 2090699.22
00:18:43.917 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x8000 length 0x8000
00:18:43.917 Nvme2n2 : 6.28 101.98 6.37 0.00 0.00 1039037.99 103244.41 1226027.32
00:18:43.917 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x8000
00:18:43.917 Nvme2n3 : 6.38 103.38 6.46 0.00 0.00 978022.07 20467.40 2116510.33
00:18:43.917 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x8000 length 0x8000
00:18:43.917 Nvme2n3 : 6.35 110.93 6.93 0.00 0.00 924453.67 21979.77 1245385.65
00:18:43.917 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x0 length 0x2000
00:18:43.917 Nvme3n1 : 6.43 127.31 7.96 0.00 0.00 768576.87 1190.99 2142321.43
00:18:43.917 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:43.917 Verification LBA range: start 0x2000 length 0x2000
00:18:43.917 Nvme3n1 : 6.39 124.80 7.80 0.00 0.00 791554.19 3327.21 1264743.98
00:18:43.917
[2024-11-27T04:40:51.120Z] ===================================================================================================================
[2024-11-27T04:40:51.120Z] Total : 1365.19 85.32 0.00 0.00 1117050.73 1190.99 2374621.34
00:18:45.304
00:18:45.304 real 0m8.898s
00:18:45.304 user 0m16.817s
00:18:45.304 sys 0m0.250s
00:18:45.304 04:40:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:45.304 04:40:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:18:45.304 ************************************
00:18:45.304 END TEST bdev_verify_big_io
00:18:45.304 ************************************
00:18:45.304 04:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:45.304 04:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:45.304 04:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:45.304 04:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:45.566 ************************************
00:18:45.566 START TEST bdev_write_zeroes
00:18:45.566 ************************************
00:18:45.566 04:40:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:45.566 [2024-11-27 04:40:52.583799] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:18:45.566 [2024-11-27 04:40:52.583923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62165 ]
00:18:45.566 [2024-11-27 04:40:52.744394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:45.827 [2024-11-27 04:40:52.847246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:46.398 Running I/O for 1 seconds...
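For the 4 KiB write_zeroes pass that follows, the same conversion is IOPS / 256. Checking the aggregate progress line reported below:

awk 'BEGIN { printf "%.2f MiB/s\n", 55488 / 256 }'   # prints 216.75, matching "55488.00 IOPS, 216.75 MiB/s"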
00:18:47.442 55488.00 IOPS, 216.75 MiB/s
00:18:47.442 Latency(us)
[2024-11-27T04:40:54.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.442 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme0n1 : 1.02 7944.70 31.03 0.00 0.00 16072.33 10788.23 25105.33
00:18:47.442 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme1n1p1 : 1.02 7934.63 30.99 0.00 0.00 16070.46 11040.30 25105.33
00:18:47.442 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme1n1p2 : 1.03 7924.38 30.95 0.00 0.00 16005.86 7410.61 24298.73
00:18:47.442 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme2n1 : 1.03 7914.88 30.92 0.00 0.00 15994.11 6856.07 23794.61
00:18:47.442 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme2n2 : 1.03 7905.91 30.88 0.00 0.00 15990.84 6856.07 23290.49
00:18:47.442 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme2n3 : 1.03 7896.82 30.85 0.00 0.00 15984.13 6654.42 23693.78
00:18:47.442 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.442 Nvme3n1 : 1.03 7825.76 30.57 0.00 0.00 16107.47 10939.47 25306.98
[2024-11-27T04:40:54.645Z] ===================================================================================================================
[2024-11-27T04:40:54.645Z] Total : 55347.07 216.20 0.00 0.00 16032.09 6654.42 25306.98
00:18:48.382
00:18:48.382 real 0m2.719s
00:18:48.382 user 0m2.411s
00:18:48.382 sys 0m0.191s
00:18:48.382 04:40:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:48.382 ************************************
00:18:48.382 END TEST bdev_write_zeroes
00:18:48.382 ************************************
00:18:48.382 04:40:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:18:48.382 04:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:48.383 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:48.383 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:48.383 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:48.383 ************************************
00:18:48.383 START TEST bdev_json_nonenclosed
00:18:48.383 ************************************
00:18:48.383 04:40:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:48.383 [2024-11-27 04:40:55.361099] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
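bdev_json_nonenclosed is a negative test: bdevperf is fed a config whose top level is not a JSON object and must exit with the json_config error shown below rather than run I/O. The fixture itself (test/bdev/nonenclosed.json) is never printed in this log, so the following is only a guess at a minimal shape that would trip the same "not enclosed in {}" check:

[
  { "subsystems": [] }
]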
00:18:48.383 [2024-11-27 04:40:55.361225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ] 00:18:48.383 [2024-11-27 04:40:55.523630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.642 [2024-11-27 04:40:55.629263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.642 [2024-11-27 04:40:55.629533] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:48.642 [2024-11-27 04:40:55.629558] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:48.642 [2024-11-27 04:40:55.629568] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:48.642 00:18:48.642 real 0m0.515s 00:18:48.642 user 0m0.312s 00:18:48.642 sys 0m0.097s 00:18:48.642 ************************************ 00:18:48.642 END TEST bdev_json_nonenclosed 00:18:48.642 ************************************ 00:18:48.642 04:40:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.642 04:40:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:48.903 04:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.903 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:48.903 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.903 04:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:48.903 ************************************ 00:18:48.903 START TEST bdev_json_nonarray 00:18:48.903 ************************************ 00:18:48.903 04:40:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.903 [2024-11-27 04:40:55.936181] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:48.903 [2024-11-27 04:40:55.936451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62238 ] 00:18:48.903 [2024-11-27 04:40:56.093873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.164 [2024-11-27 04:40:56.196465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.164 [2024-11-27 04:40:56.196564] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:49.164 [2024-11-27 04:40:56.196581] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:49.164 [2024-11-27 04:40:56.196590] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:49.425 ************************************ 00:18:49.425 END TEST bdev_json_nonarray 00:18:49.425 ************************************ 00:18:49.425 00:18:49.425 real 0m0.508s 00:18:49.425 user 0m0.310s 00:18:49.425 sys 0m0.093s 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:49.425 04:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:18:49.425 04:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:18:49.425 04:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:18:49.425 04:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.425 04:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.425 04:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:49.425 ************************************ 00:18:49.425 START TEST bdev_gpt_uuid 00:18:49.425 ************************************ 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62269 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62269 00:18:49.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62269 ']' 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.425 04:40:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:49.425 [2024-11-27 04:40:56.528018] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
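The GPT check that follows boils down to three RPCs against the freshly started spdk_tgt: load the bdev config, wait for examine to finish, then fetch each partition bdev by its GUID and compare fields with jq. Reassembled from the trace as standalone commands (rpc_cmd is the harness wrapper around scripts/rpc.py, talking to the default /var/tmp/spdk.sock that waitforlisten above was watching):

rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
rpc_cmd bdev_wait_for_examine
# SPDK_TEST_first: the unique partition GUID doubles as the bdev alias
rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid'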
00:18:49.425 [2024-11-27 04:40:56.528162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:18:49.683 [2024-11-27 04:40:56.688648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.683 [2024-11-27 04:40:56.789291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.248 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.248 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:18:50.248 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:50.248 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.248 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.815 Some configs were skipped because the RPC state that can call them passed over. 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:18:50.815 { 00:18:50.815 "name": "Nvme1n1p1", 00:18:50.815 "aliases": [ 00:18:50.815 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:18:50.815 ], 00:18:50.815 "product_name": "GPT Disk", 00:18:50.815 "block_size": 4096, 00:18:50.815 "num_blocks": 655104, 00:18:50.815 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:50.815 "assigned_rate_limits": { 00:18:50.815 "rw_ios_per_sec": 0, 00:18:50.815 "rw_mbytes_per_sec": 0, 00:18:50.815 "r_mbytes_per_sec": 0, 00:18:50.815 "w_mbytes_per_sec": 0 00:18:50.815 }, 00:18:50.815 "claimed": false, 00:18:50.815 "zoned": false, 00:18:50.815 "supported_io_types": { 00:18:50.815 "read": true, 00:18:50.815 "write": true, 00:18:50.815 "unmap": true, 00:18:50.815 "flush": true, 00:18:50.815 "reset": true, 00:18:50.815 "nvme_admin": false, 00:18:50.815 "nvme_io": false, 00:18:50.815 "nvme_io_md": false, 00:18:50.815 "write_zeroes": true, 00:18:50.815 "zcopy": false, 00:18:50.815 "get_zone_info": false, 00:18:50.815 "zone_management": false, 00:18:50.815 "zone_append": false, 00:18:50.815 "compare": true, 00:18:50.815 "compare_and_write": false, 00:18:50.815 "abort": true, 00:18:50.815 "seek_hole": false, 00:18:50.815 "seek_data": false, 00:18:50.815 "copy": true, 00:18:50.815 "nvme_iov_md": false 00:18:50.815 }, 00:18:50.815 "driver_specific": { 
00:18:50.815 "gpt": { 00:18:50.815 "base_bdev": "Nvme1n1", 00:18:50.815 "offset_blocks": 256, 00:18:50.815 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:18:50.815 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:50.815 "partition_name": "SPDK_TEST_first" 00:18:50.815 } 00:18:50.815 } 00:18:50.815 } 00:18:50.815 ]' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:18:50.815 { 00:18:50.815 "name": "Nvme1n1p2", 00:18:50.815 "aliases": [ 00:18:50.815 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:18:50.815 ], 00:18:50.815 "product_name": "GPT Disk", 00:18:50.815 "block_size": 4096, 00:18:50.815 "num_blocks": 655103, 00:18:50.815 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:50.815 "assigned_rate_limits": { 00:18:50.815 "rw_ios_per_sec": 0, 00:18:50.815 "rw_mbytes_per_sec": 0, 00:18:50.815 "r_mbytes_per_sec": 0, 00:18:50.815 "w_mbytes_per_sec": 0 00:18:50.815 }, 00:18:50.815 "claimed": false, 00:18:50.815 "zoned": false, 00:18:50.815 "supported_io_types": { 00:18:50.815 "read": true, 00:18:50.815 "write": true, 00:18:50.815 "unmap": true, 00:18:50.815 "flush": true, 00:18:50.815 "reset": true, 00:18:50.815 "nvme_admin": false, 00:18:50.815 "nvme_io": false, 00:18:50.815 "nvme_io_md": false, 00:18:50.815 "write_zeroes": true, 00:18:50.815 "zcopy": false, 00:18:50.815 "get_zone_info": false, 00:18:50.815 "zone_management": false, 00:18:50.815 "zone_append": false, 00:18:50.815 "compare": true, 00:18:50.815 "compare_and_write": false, 00:18:50.815 "abort": true, 00:18:50.815 "seek_hole": false, 00:18:50.815 "seek_data": false, 00:18:50.815 "copy": true, 00:18:50.815 "nvme_iov_md": false 00:18:50.815 }, 00:18:50.815 "driver_specific": { 00:18:50.815 "gpt": { 00:18:50.815 "base_bdev": "Nvme1n1", 00:18:50.815 "offset_blocks": 655360, 00:18:50.815 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:18:50.815 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:50.815 "partition_name": "SPDK_TEST_second" 00:18:50.815 } 00:18:50.815 } 00:18:50.815 } 00:18:50.815 ]' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62269 00:18:50.815 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62269 ']' 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62269 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62269 00:18:50.816 killing process with pid 62269 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62269' 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62269 00:18:50.816 04:40:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62269 00:18:52.712 ************************************ 00:18:52.712 END TEST bdev_gpt_uuid 00:18:52.712 ************************************ 00:18:52.712 00:18:52.712 real 0m3.036s 00:18:52.712 user 0m3.222s 00:18:52.712 sys 0m0.374s 00:18:52.712 04:40:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.712 04:40:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:18:52.712 04:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:52.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.968 Waiting for block devices as requested 00:18:52.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:52.968 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:18:52.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:52.968 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.227 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:58.227 04:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:18:58.227 04:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:18:58.485 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:18:58.485 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:18:58.486 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:18:58.486 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:18:58.486 04:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:18:58.486 00:18:58.486 real 0m58.080s 00:18:58.486 user 1m13.742s 00:18:58.486 sys 0m8.380s 00:18:58.486 04:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.486 04:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:58.486 ************************************ 00:18:58.486 END TEST blockdev_nvme_gpt 00:18:58.486 ************************************ 00:18:58.486 04:41:05 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:58.486 04:41:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.486 04:41:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.486 04:41:05 -- common/autotest_common.sh@10 -- # set +x 00:18:58.486 ************************************ 00:18:58.486 START TEST nvme 00:18:58.486 ************************************ 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:58.486 * Looking for test storage... 00:18:58.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.486 04:41:05 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.486 04:41:05 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.486 04:41:05 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.486 04:41:05 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.486 04:41:05 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.486 04:41:05 nvme -- scripts/common.sh@344 -- # case "$op" in 00:18:58.486 04:41:05 nvme -- scripts/common.sh@345 -- # : 1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.486 04:41:05 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.486 04:41:05 nvme -- scripts/common.sh@365 -- # decimal 1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@353 -- # local d=1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.486 04:41:05 nvme -- scripts/common.sh@355 -- # echo 1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.486 04:41:05 nvme -- scripts/common.sh@366 -- # decimal 2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@353 -- # local d=2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.486 04:41:05 nvme -- scripts/common.sh@355 -- # echo 2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.486 04:41:05 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.486 04:41:05 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.486 04:41:05 nvme -- scripts/common.sh@368 -- # return 0 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.486 --rc genhtml_branch_coverage=1 00:18:58.486 --rc genhtml_function_coverage=1 00:18:58.486 --rc genhtml_legend=1 00:18:58.486 --rc geninfo_all_blocks=1 00:18:58.486 --rc geninfo_unexecuted_blocks=1 00:18:58.486 00:18:58.486 ' 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.486 --rc genhtml_branch_coverage=1 00:18:58.486 --rc genhtml_function_coverage=1 00:18:58.486 --rc genhtml_legend=1 00:18:58.486 --rc geninfo_all_blocks=1 00:18:58.486 --rc geninfo_unexecuted_blocks=1 00:18:58.486 00:18:58.486 ' 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.486 --rc genhtml_branch_coverage=1 00:18:58.486 --rc genhtml_function_coverage=1 00:18:58.486 --rc genhtml_legend=1 00:18:58.486 --rc geninfo_all_blocks=1 00:18:58.486 --rc geninfo_unexecuted_blocks=1 00:18:58.486 00:18:58.486 ' 00:18:58.486 04:41:05 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.486 --rc genhtml_branch_coverage=1 00:18:58.486 --rc genhtml_function_coverage=1 00:18:58.486 --rc genhtml_legend=1 00:18:58.486 --rc geninfo_all_blocks=1 00:18:58.486 --rc geninfo_unexecuted_blocks=1 00:18:58.486 00:18:58.486 ' 00:18:58.486 04:41:05 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:59.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:59.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.368 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.625 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.625 04:41:06 nvme -- nvme/nvme.sh@79 -- # uname 00:18:59.625 04:41:06 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:18:59.625 04:41:06 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:18:59.625 04:41:06 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:18:59.626 04:41:06 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:18:59.626 Waiting for stub to ready for secondary processes... 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1075 -- # stubpid=62904 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62904 ]] 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:18:59.626 04:41:06 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:18:59.626 [2024-11-27 04:41:06.689301] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:18:59.626 [2024-11-27 04:41:06.689577] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:19:00.559 [2024-11-27 04:41:07.426739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.559 [2024-11-27 04:41:07.520829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.559 [2024-11-27 04:41:07.521322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.559 [2024-11-27 04:41:07.521343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.559 [2024-11-27 04:41:07.536179] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:19:00.559 [2024-11-27 04:41:07.536623] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:00.559 [2024-11-27 04:41:07.547843] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:19:00.559 [2024-11-27 04:41:07.548128] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:19:00.559 [2024-11-27 04:41:07.551350] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:00.559 [2024-11-27 04:41:07.551754] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:19:00.559 [2024-11-27 04:41:07.551944] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:19:00.559 [2024-11-27 04:41:07.554723] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:00.559 [2024-11-27 04:41:07.555029] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:19:00.559 [2024-11-27 04:41:07.555231] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:19:00.559 [2024-11-27 04:41:07.558190] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:00.559 [2024-11-27 04:41:07.558503] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:19:00.559 [2024-11-27 04:41:07.558684] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:19:00.559 [2024-11-27 04:41:07.558832] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:19:00.560 [2024-11-27 04:41:07.558972] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:19:00.560 04:41:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:00.560 04:41:07 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:19:00.560 done. 00:19:00.560 04:41:07 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:00.560 04:41:07 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:19:00.560 04:41:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.560 04:41:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.560 ************************************ 00:19:00.560 START TEST nvme_reset 00:19:00.560 ************************************ 00:19:00.560 04:41:07 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:00.817 Initializing NVMe Controllers 00:19:00.817 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:00.817 Skipping QEMU NVMe SSD at 0000:00:11.0 00:19:00.817 Skipping QEMU NVMe SSD at 0000:00:13.0 00:19:00.817 Skipping QEMU NVMe SSD at 0000:00:12.0 00:19:00.817 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:00.817 00:19:00.817 real 0m0.268s 00:19:00.817 user 0m0.088s 00:19:00.817 sys 0m0.131s 00:19:00.817 04:41:07 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.817 ************************************ 00:19:00.817 END TEST nvme_reset 00:19:00.817 ************************************ 00:19:00.817 04:41:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:00.817 04:41:07 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:00.817 04:41:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:00.817 04:41:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.817 04:41:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.817 ************************************ 00:19:00.817 START TEST nvme_identify 00:19:00.817 ************************************ 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:19:00.817 04:41:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:00.817 04:41:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:00.817 04:41:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:00.817 04:41:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:00.817 04:41:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:01.079 04:41:08 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:19:01.079 04:41:08 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:01.079 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:01.079 
===================================================== 00:19:01.079 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:01.079 ===================================================== 00:19:01.079 Controller Capabilities/Features 00:19:01.079 ================================ 00:19:01.079 Vendor ID: 1b36 00:19:01.079 Subsystem Vendor ID: 1af4 00:19:01.079 Serial Number: 12340 00:19:01.079 Model Number: QEMU NVMe Ctrl 00:19:01.079 Firmware Version: 8.0.0 00:19:01.079 Recommended Arb Burst: 6 00:19:01.079 IEEE OUI Identifier: 00 54 52 00:19:01.079 Multi-path I/O 00:19:01.079 May have multiple subsystem ports: No 00:19:01.079 May have multiple controllers: No 00:19:01.079 Associated with SR-IOV VF: No 00:19:01.079 Max Data Transfer Size: 524288 00:19:01.079 Max Number of Namespaces: 256 00:19:01.079 Max Number of I/O Queues: 64 00:19:01.079 NVMe Specification Version (VS): 1.4 00:19:01.079 NVMe Specification Version (Identify): 1.4 00:19:01.079 Maximum Queue Entries: 2048 00:19:01.079 Contiguous Queues Required: Yes 00:19:01.079 Arbitration Mechanisms Supported 00:19:01.079 Weighted Round Robin: Not Supported 00:19:01.079 Vendor Specific: Not Supported 00:19:01.079 Reset Timeout: 7500 ms 00:19:01.079 Doorbell Stride: 4 bytes 00:19:01.079 NVM Subsystem Reset: Not Supported 00:19:01.079 Command Sets Supported 00:19:01.079 NVM Command Set: Supported 00:19:01.079 Boot Partition: Not Supported 00:19:01.079 Memory Page Size Minimum: 4096 bytes 00:19:01.079 Memory Page Size Maximum: 65536 bytes 00:19:01.079 Persistent Memory Region: Not Supported 00:19:01.079 Optional Asynchronous Events Supported 00:19:01.079 Namespace Attribute Notices: Supported 00:19:01.079 Firmware Activation Notices: Not Supported 00:19:01.079 ANA Change Notices: Not Supported 00:19:01.079 PLE Aggregate Log Change Notices: Not Supported 00:19:01.079 LBA Status Info Alert Notices: Not Supported 00:19:01.079 EGE Aggregate Log Change Notices: Not Supported 00:19:01.079 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.079 Zone Descriptor Change Notices: Not Supported 00:19:01.079 Discovery Log Change Notices: Not Supported 00:19:01.079 Controller Attributes 00:19:01.079 128-bit Host Identifier: Not Supported 00:19:01.079 Non-Operational Permissive Mode: Not Supported 00:19:01.079 NVM Sets: Not Supported 00:19:01.079 Read Recovery Levels: Not Supported 00:19:01.079 Endurance Groups: Not Supported 00:19:01.079 Predictable Latency Mode: Not Supported 00:19:01.079 Traffic Based Keep ALive: Not Supported 00:19:01.079 Namespace Granularity: Not Supported 00:19:01.079 SQ Associations: Not Supported 00:19:01.079 UUID List: Not Supported 00:19:01.079 Multi-Domain Subsystem: Not Supported 00:19:01.079 Fixed Capacity Management: Not Supported 00:19:01.079 Variable Capacity Management: Not Supported 00:19:01.079 Delete Endurance Group: Not Supported 00:19:01.079 Delete NVM Set: Not Supported 00:19:01.079 Extended LBA Formats Supported: Supported 00:19:01.079 Flexible Data Placement Supported: Not Supported 00:19:01.079 00:19:01.079 Controller Memory Buffer Support 00:19:01.079 ================================ 00:19:01.079 Supported: No 00:19:01.079 00:19:01.079 Persistent Memory Region Support 00:19:01.079 ================================ 00:19:01.079 Supported: No 00:19:01.079 00:19:01.079 Admin Command Set Attributes 00:19:01.079 ============================ 00:19:01.079 Security Send/Receive: Not Supported 00:19:01.079 Format NVM: Supported 00:19:01.079 Firmware Activate/Download: Not Supported 00:19:01.079 Namespace Management: 
Supported 00:19:01.079 Device Self-Test: Not Supported 00:19:01.079 Directives: Supported 00:19:01.079 NVMe-MI: Not Supported 00:19:01.079 Virtualization Management: Not Supported 00:19:01.079 Doorbell Buffer Config: Supported 00:19:01.079 Get LBA Status Capability: Not Supported 00:19:01.079 Command & Feature Lockdown Capability: Not Supported 00:19:01.079 Abort Command Limit: 4 00:19:01.079 Async Event Request Limit: 4 00:19:01.079 Number of Firmware Slots: N/A 00:19:01.079 Firmware Slot 1 Read-Only: N/A 00:19:01.079 Firmware Activation Without Reset: N/A 00:19:01.079 Multiple Update Detection Support: N/A 00:19:01.079 Firmware Update Granularity: No Information Provided 00:19:01.079 Per-Namespace SMART Log: Yes 00:19:01.079 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.079 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:01.079 Command Effects Log Page: Supported 00:19:01.079 Get Log Page Extended Data: Supported 00:19:01.079 Telemetry Log Pages: Not Supported 00:19:01.079 Persistent Event Log Pages: Not Supported 00:19:01.079 Supported Log Pages Log Page: May Support 00:19:01.079 Commands Supported & Effects Log Page: Not Supported 00:19:01.079 Feature Identifiers & Effects Log Page:May Support 00:19:01.079 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.079 Data Area 4 for Telemetry Log: Not Supported 00:19:01.079 Error Log Page Entries Supported: 1 00:19:01.079 Keep Alive: Not Supported 00:19:01.079 00:19:01.079 NVM Command Set Attributes 00:19:01.079 ========================== 00:19:01.079 Submission Queue Entry Size 00:19:01.079 Max: 64 00:19:01.079 Min: 64 00:19:01.079 Completion Queue Entry Size 00:19:01.079 Max: 16 00:19:01.079 Min: 16 00:19:01.079 Number of Namespaces: 256 00:19:01.079 Compare Command: Supported 00:19:01.079 Write Uncorrectable Command: Not Supported 00:19:01.079 Dataset Management Command: Supported 00:19:01.079 Write Zeroes Command: Supported 00:19:01.079 Set Features Save Field: Supported 00:19:01.079 Reservations: Not Supported 00:19:01.079 Timestamp: Supported 00:19:01.079 Copy: Supported 00:19:01.079 Volatile Write Cache: Present 00:19:01.079 Atomic Write Unit (Normal): 1 00:19:01.079 Atomic Write Unit (PFail): 1 00:19:01.079 Atomic Compare & Write Unit: 1 00:19:01.079 Fused Compare & Write: Not Supported 00:19:01.079 Scatter-Gather List 00:19:01.079 SGL Command Set: Supported 00:19:01.079 SGL Keyed: Not Supported 00:19:01.079 SGL Bit Bucket Descriptor: Not Supported 00:19:01.079 SGL Metadata Pointer: Not Supported 00:19:01.079 Oversized SGL: Not Supported 00:19:01.079 SGL Metadata Address: Not Supported 00:19:01.079 SGL Offset: Not Supported 00:19:01.079 Transport SGL Data Block: Not Supported 00:19:01.079 Replay Protected Memory Block: Not Supported 00:19:01.079 00:19:01.079 Firmware Slot Information 00:19:01.079 ========================= 00:19:01.079 Active slot: 1 00:19:01.079 Slot 1 Firmware Revision: 1.0 00:19:01.079 00:19:01.079 00:19:01.079 Commands Supported and Effects 00:19:01.079 ============================== 00:19:01.079 Admin Commands 00:19:01.079 -------------- 00:19:01.079 Delete I/O Submission Queue (00h): Supported 00:19:01.079 Create I/O Submission Queue (01h): Supported 00:19:01.079 Get Log Page (02h): Supported 00:19:01.079 Delete I/O Completion Queue (04h): Supported 00:19:01.079 Create I/O Completion Queue (05h): Supported 00:19:01.079 Identify (06h): Supported 00:19:01.079 Abort (08h): Supported 00:19:01.079 Set Features (09h): Supported 00:19:01.079 Get Features (0Ah): Supported 00:19:01.079 Asynchronous 
Event Request (0Ch): Supported 00:19:01.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.079 Directive Send (19h): Supported 00:19:01.079 Directive Receive (1Ah): Supported 00:19:01.079 Virtualization Management (1Ch): Supported 00:19:01.079 Doorbell Buffer Config (7Ch): Supported 00:19:01.079 Format NVM (80h): Supported LBA-Change 00:19:01.079 I/O Commands 00:19:01.079 ------------ 00:19:01.079 Flush (00h): Supported LBA-Change 00:19:01.079 Write (01h): Supported LBA-Change 00:19:01.079 Read (02h): Supported 00:19:01.079 Compare (05h): Supported 00:19:01.079 Write Zeroes (08h): Supported LBA-Change 00:19:01.079 Dataset Management (09h): Supported LBA-Change 00:19:01.079 Unknown (0Ch): Supported 00:19:01.079 Unknown (12h): Supported 00:19:01.079 Copy (19h): Supported LBA-Change 00:19:01.079 Unknown (1Dh): Supported LBA-Change 00:19:01.080 00:19:01.080 Error Log 00:19:01.080 ========= 00:19:01.080 00:19:01.080 Arbitration 00:19:01.080 =========== 00:19:01.080 Arbitration Burst: no limit 00:19:01.080 00:19:01.080 Power Management 00:19:01.080 ================ 00:19:01.080 Number of Power States: 1 00:19:01.080 Current Power State: Power State #0 00:19:01.080 Power State #0: 00:19:01.080 Max Power: 25.00 W 00:19:01.080 Non-Operational State: Operational 00:19:01.080 Entry Latency: 16 microseconds 00:19:01.080 Exit Latency: 4 microseconds 00:19:01.080 Relative Read Throughput: 0 00:19:01.080 Relative Read Latency: 0 00:19:01.080 Relative Write Throughput: 0 00:19:01.080 Relative Write Latency: 0 00:19:01.080 Idle Power: Not Reported 00:19:01.080 Active Power: Not Reported 00:19:01.080 Non-Operational Permissive Mode: Not Supported 00:19:01.080 00:19:01.080 Health Information 00:19:01.080 ================== 00:19:01.080 Critical Warnings: 00:19:01.080 Available Spare Space: OK 00:19:01.080 Temperature: OK 00:19:01.080 Device Reliability: OK 00:19:01.080 Read Only: No 00:19:01.080 Volatile Memory Backup: OK 00:19:01.080 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.080 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.080 Available Spare: 0% 00:19:01.080 Available Spare Threshold: 0% 00:19:01.080 Life Percentage Used: 0% 00:19:01.080 Data Units Read: 597 00:19:01.080 Data Units Written: 525 00:19:01.080 Host Read Commands: 31381 00:19:01.080 Host Write Commands: 31167 00:19:01.080 Controller Busy Time: 0 minutes 00:19:01.080 Power Cycles: 0 00:19:01.080 Power On Hours: 0 hours 00:19:01.080 Unsafe Shutdowns: 0 00:19:01.080 Unrecoverable Media Errors: 0 00:19:01.080 Lifetime Error Log Entries: 0 00:19:01.080 Warning Temperature Time: 0 minutes 00:19:01.080 Critical Temperature Time: 0 minutes 00:19:01.080 00:19:01.080 Number of Queues 00:19:01.080 ================ 00:19:01.080 Number of I/O Submission Queues: 64 00:19:01.080 Number of I/O Completion Queues: 64 00:19:01.080 00:19:01.080 ZNS Specific Controller Data 00:19:01.080 ============================ 00:19:01.080 Zone Append Size Limit: 0 00:19:01.080 00:19:01.080 00:19:01.080 Active Namespaces 00:19:01.080 ================= 00:19:01.080 Namespace ID:1 00:19:01.080 Error Recovery Timeout: Unlimited 00:19:01.080 Command Set Identifier: NVM (00h) 00:19:01.080 Deallocate: Supported 00:19:01.080 Deallocated/Unwritten Error: Supported 00:19:01.080 Deallocated Read Value: All 0x00 00:19:01.080 Deallocate in Write Zeroes: Not Supported 00:19:01.080 Deallocated Guard Field: 0xFFFF 00:19:01.080 Flush: Supported 00:19:01.080 Reservation: Not Supported 00:19:01.080 Metadata Transferred as: Separate Metadata Buffer 
00:19:01.080 Namespace Sharing Capabilities: Private 00:19:01.080 Size (in LBAs): 1548666 (5GiB) 00:19:01.080 Capacity (in LBAs): 1548666 (5GiB) 00:19:01.080 Utilization (in LBAs): 1548666 (5GiB) 00:19:01.080 Thin Provisioning: Not Supported 00:19:01.080 Per-NS Atomic Units: No 00:19:01.080 Maximum Single Source Range Length: 128 00:19:01.080 Maximum Copy Length: 128 00:19:01.080 Maximum Source Range Count: 128 00:19:01.080 NGUID/EUI64 Never Reused: No 00:19:01.080 Namespace Write Protected: No 00:19:01.080 Number of LBA Formats: 8 00:19:01.080 Current LBA Format: LBA Format #07 00:19:01.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.080 00:19:01.080 NVM Specific Namespace Data 00:19:01.080 =========================== 00:19:01.080 Logical Block Storage Tag Mask: 0 00:19:01.080 Protection Information Capabilities: 00:19:01.080 16b Guard Protection Information Storage Tag Support: No 00:19:01.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.080 Storage Tag Check Read Support: No 00:19:01.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.080 ===================================================== 00:19:01.080 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:01.080 ===================================================== 00:19:01.080 Controller Capabilities/Features 00:19:01.080 ================================ 00:19:01.080 Vendor ID: 1b36 00:19:01.080 Subsystem Vendor ID: 1af4 00:19:01.080 Serial Number: 12341 00:19:01.080 Model Number: QEMU NVMe Ctrl 00:19:01.080 Firmware Version: 8.0.0 00:19:01.080 Recommended Arb Burst: 6 00:19:01.080 IEEE OUI Identifier: 00 54 52 00:19:01.080 Multi-path I/O 00:19:01.080 May have multiple subsystem ports: No 00:19:01.080 May have multiple controllers: No 00:19:01.080 Associated with SR-IOV VF: No 00:19:01.080 Max Data Transfer Size: 524288 00:19:01.080 Max Number of Namespaces: 256 00:19:01.080 Max Number of I/O Queues: 64 00:19:01.080 NVMe Specification Version (VS): 1.4 00:19:01.080 NVMe Specification Version (Identify): 1.4 00:19:01.080 Maximum Queue Entries: 2048 00:19:01.080 Contiguous Queues Required: Yes 00:19:01.080 Arbitration Mechanisms Supported 00:19:01.080 Weighted Round Robin: Not Supported 00:19:01.080 Vendor Specific: Not Supported 00:19:01.080 Reset Timeout: 7500 ms 00:19:01.080 Doorbell Stride: 
4 bytes 00:19:01.080 NVM Subsystem Reset: Not Supported 00:19:01.080 Command Sets Supported 00:19:01.080 NVM Command Set: Supported 00:19:01.080 Boot Partition: Not Supported 00:19:01.080 Memory Page Size Minimum: 4096 bytes 00:19:01.080 Memory Page Size Maximum: 65536 bytes 00:19:01.080 Persistent Memory Region: Not Supported 00:19:01.080 Optional Asynchronous Events Supported 00:19:01.080 Namespace Attribute Notices: Supported 00:19:01.080 Firmware Activation Notices: Not Supported 00:19:01.080 ANA Change Notices: Not Supported 00:19:01.080 PLE Aggregate Log Change Notices: Not Supported 00:19:01.080 LBA Status Info Alert Notices: Not Supported 00:19:01.080 EGE Aggregate Log Change Notices: Not Supported 00:19:01.080 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.080 Zone Descriptor Change Notices: Not Supported 00:19:01.080 Discovery Log Change Notices: Not Supported 00:19:01.080 Controller Attributes 00:19:01.080 128-bit Host Identifier: Not Supported 00:19:01.080 Non-Operational Permissive Mode: Not Supported 00:19:01.080 NVM Sets: Not Supported 00:19:01.080 Read Recovery Levels: Not Supported 00:19:01.080 Endurance Groups: Not Supported 00:19:01.080 Predictable Latency Mode: Not Supported 00:19:01.081 Traffic Based Keep ALive: Not Supported 00:19:01.081 Namespace Granularity: Not Supported 00:19:01.081 SQ Associations: Not Supported 00:19:01.081 UUID List: Not Supported 00:19:01.081 Multi-Domain Subsystem: Not Supported 00:19:01.081 Fixed Capacity Management: Not Supported 00:19:01.081 Variable Capacity Management: Not Supported 00:19:01.081 Delete Endurance Group: Not Supported 00:19:01.081 Delete NVM Set: Not Supported 00:19:01.081 Extended LBA Formats Supported: Supported 00:19:01.081 Flexible Data Placement Supported: Not Supported 00:19:01.081 00:19:01.081 Controller Memory Buffer Support 00:19:01.081 ================================ 00:19:01.081 Supported: No 00:19:01.081 00:19:01.081 Persistent Memory Region Support 00:19:01.081 ================================ 00:19:01.081 Supported: No 00:19:01.081 00:19:01.081 Admin Command Set Attributes 00:19:01.081 ============================ 00:19:01.081 Security Send/Receive: Not Supported 00:19:01.081 Format NVM: Supported 00:19:01.081 Firmware Activate/Download: Not Supported 00:19:01.081 Namespace Management: Supported 00:19:01.081 Device Self-Test: Not Supported 00:19:01.081 Directives: Supported 00:19:01.081 NVMe-MI: Not Supported 00:19:01.081 Virtualization Management: Not Supported 00:19:01.081 Doorbell Buffer Config: Supported 00:19:01.081 Get LBA Status Capability: Not Supported 00:19:01.081 Command & Feature Lockdown Capability: Not Supported 00:19:01.081 Abort Command Limit: 4 00:19:01.081 Async Event Request Limit: 4 00:19:01.081 Number of Firmware Slots: N/A 00:19:01.081 Firmware Slot 1 Read-Only: N/A 00:19:01.081 Firmware Activation Without Reset: N/A 00:19:01.081 Multiple Update Detection Support: N/A 00:19:01.081 Firmware Update Granularity: No Information Provided 00:19:01.081 Per-Namespace SMART Log: Yes 00:19:01.081 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.081 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:19:01.081 Command Effects Log Page: Supported 00:19:01.081 Get Log Page Extended Data: Supported 00:19:01.081 Telemetry Log Pages: Not Supported 00:19:01.081 Persistent Event Log Pages: Not Supported 00:19:01.081 Supported Log Pages Log Page: May Support 00:19:01.081 Commands Supported & Effects Log Page: Not Supported 00:19:01.081 Feature Identifiers & Effects Log Page:May Support 
00:19:01.081 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.081 Data Area 4 for Telemetry Log: Not Supported 00:19:01.081 Error Log Page Entries Supported: 1 00:19:01.081 Keep Alive: Not Supported 00:19:01.081 00:19:01.081 NVM Command Set Attributes 00:19:01.081 ========================== 00:19:01.081 Submission Queue Entry Size 00:19:01.081 Max: 64 00:19:01.081 Min: 64 00:19:01.081 Completion Queue Entry Size 00:19:01.081 Max: 16 00:19:01.081 Min: 16 00:19:01.081 Number of Namespaces: 256 00:19:01.081 Compare Command: Supported 00:19:01.081 Write Uncorrectable Command: Not Supported 00:19:01.081 Dataset Management Command: Supported 00:19:01.081 Write Zeroes Command: Supported 00:19:01.081 Set Features Save Field: Supported 00:19:01.081 Reservations: Not Supported 00:19:01.081 Timestamp: Supported 00:19:01.081 Copy: Supported 00:19:01.081 Volatile Write Cache: Present 00:19:01.081 Atomic Write Unit (Normal): 1 00:19:01.081 Atomic Write Unit (PFail): 1 00:19:01.081 Atomic Compare & Write Unit: 1 00:19:01.081 Fused Compare & Write: Not Supported 00:19:01.081 Scatter-Gather List 00:19:01.081 SGL Command Set: Supported 00:19:01.081 SGL Keyed: Not Supported 00:19:01.081 SGL Bit Bucket Descriptor: Not Supported 00:19:01.081 SGL Metadata Pointer: Not Supported 00:19:01.081 Oversized SGL: Not Supported 00:19:01.081 SGL Metadata Address: Not Supported 00:19:01.081 SGL Offset: Not Supported 00:19:01.081 Transport SGL Data Block: Not Supported 00:19:01.081 Replay Protected Memory Block: Not Supported 00:19:01.081 00:19:01.081 Firmware Slot Information 00:19:01.081 ========================= 00:19:01.081 Active slot: 1 00:19:01.081 Slot 1 Firmware Revision: 1.0 00:19:01.081 00:19:01.081 00:19:01.081 Commands Supported and Effects 00:19:01.081 ============================== 00:19:01.081 Admin Commands 00:19:01.081 -------------- 00:19:01.081 Delete I/O Submission Queue (00h): Supported 00:19:01.081 Create I/O Submission Queue (01h): Supported 00:19:01.081 Get Log Page (02h): Supported 00:19:01.081 Delete I/O Completion Queue (04h): Supported 00:19:01.081 Create I/O Completion Queue (05h): Supported 00:19:01.081 Identify (06h): Supported 00:19:01.081 Abort (08h): Supported 00:19:01.081 Set Features (09h): Supported 00:19:01.081 Get Features (0Ah): Supported 00:19:01.081 Asynchronous Event Request (0Ch): Supported 00:19:01.081 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.081 Directive Send (19h): Supported 00:19:01.081 Directive Receive (1Ah): Supported 00:19:01.081 Virtualization Management (1Ch): Supported 00:19:01.081 Doorbell Buffer Config (7Ch): Supported 00:19:01.081 Format NVM (80h): Supported LBA-Change 00:19:01.081 I/O Commands 00:19:01.081 ------------ 00:19:01.081 Flush (00h): Supported LBA-Change 00:19:01.081 Write (01h): Supported LBA-Change 00:19:01.081 Read (02h): Supported 00:19:01.081 Compare (05h): Supported 00:19:01.081 Write Zeroes (08h): Supported LBA-Change 00:19:01.081 Dataset Management (09h): Supported LBA-Change 00:19:01.081 Unknown (0Ch): Supported 00:19:01.081 Unknown (12h): Supported 00:19:01.081 Copy (19h): Supported LBA-Change 00:19:01.081 Unknown (1Dh): Supported LBA-Change 00:19:01.081 00:19:01.081 Error Log 00:19:01.081 ========= 00:19:01.081 00:19:01.081 Arbitration 00:19:01.081 =========== 00:19:01.081 Arbitration Burst: no limit 00:19:01.081 00:19:01.081 Power Management 00:19:01.081 ================ 00:19:01.081 Number of Power States: 1 00:19:01.081 Current Power State: Power State #0 00:19:01.081 Power State #0: 00:19:01.081 Max 
Power: 25.00 W 00:19:01.081 Non-Operational State: Operational 00:19:01.081 Entry Latency: 16 microseconds 00:19:01.081 Exit Latency: 4 microseconds 00:19:01.081 Relative Read Throughput: 0 00:19:01.081 Relative Read Latency: 0 00:19:01.081 Relative Write Throughput: 0 00:19:01.081 Relative Write Latency: 0 00:19:01.081 Idle Power: Not Reported 00:19:01.081 Active Power: Not Reported 00:19:01.081 Non-Operational Permissive Mode: Not Supported 00:19:01.081 00:19:01.081 Health Information 00:19:01.081 ================== 00:19:01.081 Critical Warnings: 00:19:01.081 Available Spare Space: OK 00:19:01.081 Temperature: OK 00:19:01.081 Device Reliability: OK 00:19:01.081 Read Only: No 00:19:01.081 Volatile Memory Backup: OK 00:19:01.081 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.081 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.081 Available Spare: 0% 00:19:01.081 Available Spare Threshold: 0% 00:19:01.081 Life Percentage Used: 0% 00:19:01.081 Data Units Read: 914 00:19:01.081 Data Units Written: 787 00:19:01.081 Host Read Commands: 46439 00:19:01.081 Host Write Commands: 45338 00:19:01.081 Controller Busy Time: 0 minutes 00:19:01.081 Power Cycles: 0 00:19:01.081 Power On Hours: 0 hours 00:19:01.081 Unsafe Shutdowns: 0 00:19:01.081 Unrecoverable Media Errors: 0 00:19:01.081 Lifetime Error Log Entries: 0 00:19:01.081 Warning Temperature Time: 0 minutes 00:19:01.081 Critical Temperature Time: 0 minutes 00:19:01.081 00:19:01.081 Number of Queues 00:19:01.081 ================ 00:19:01.081 Number of I/O Submission Queues: 64 00:19:01.081 Number of I/O Completion Queues: 64 00:19:01.081 00:19:01.081 ZNS Specific Controller Data 00:19:01.081 ============================ 00:19:01.081 Zone Append Size Limit: 0 00:19:01.081 00:19:01.081 00:19:01.081 Active Namespaces 00:19:01.081 ================= 00:19:01.081 Namespace ID:1 00:19:01.081 Error Recovery Timeout: Unlimited 00:19:01.081 Command Set Identifier: NVM (00h) 00:19:01.081 Deallocate: Supported 00:19:01.081 Deallocated/Unwritten Error: Supported 00:19:01.081 Deallocated Read Value: All 0x00 00:19:01.081 Deallocate in Write Zeroes: Not Supported 00:19:01.081 Deallocated Guard Field: 0xFFFF 00:19:01.081 Flush: Supported 00:19:01.081 Reservation: Not Supported 00:19:01.081 Namespace Sharing Capabilities: Private 00:19:01.081 Size (in LBAs): 1310720 (5GiB) 00:19:01.081 Capacity (in LBAs): 1310720 (5GiB) 00:19:01.081 Utilization (in LBAs): 1310720 (5GiB) 00:19:01.082 Thin Provisioning: Not Supported 00:19:01.082 Per-NS Atomic Units: No 00:19:01.082 Maximum Single Source Range Length: 128 00:19:01.082 Maximum Copy Length: 128 00:19:01.082 Maximum Source Range Count: 128 00:19:01.082 NGUID/EUI64 Never Reused: No 00:19:01.082 Namespace Write Protected: No 00:19:01.082 Number of LBA Formats: 8 00:19:01.082 Current LBA Format: LBA Format #04 00:19:01.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.082 00:19:01.082 NVM Specific Namespace Data 00:19:01.082 =========================== 00:19:01.082 Logical Block Storage Tag Mask: 0 00:19:01.082 Protection Information Capabilities: 00:19:01.082 16b Guard 
Protection Information Storage Tag Support: No 00:19:01.082 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.082 Storage Tag Check Read Support: No 00:19:01.082 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.082 ===================================================== 00:19:01.082 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:01.082 ===================================================== 00:19:01.082 Controller Capabilities/Features 00:19:01.082 ================================ 00:19:01.082 Vendor ID: 1b36 00:19:01.082 Subsystem Vendor ID: 1af4 00:19:01.082 Serial Number: 12343 00:19:01.082 Model Number: QEMU NVMe Ctrl 00:19:01.082 Firmware Version: 8.0.0 00:19:01.082 Recommended Arb Burst: 6 00:19:01.082 IEEE OUI Identifier: 00 54 52 00:19:01.082 Multi-path I/O 00:19:01.082 May have multiple subsystem ports: No 00:19:01.082 May have multiple controllers: Yes 00:19:01.082 Associated with SR-IOV VF: No 00:19:01.082 Max Data Transfer Size: 524288 00:19:01.082 Max Number of Namespaces: 256 00:19:01.082 Max Number of I/O Queues: 64 00:19:01.082 NVMe Specification Version (VS): 1.4 00:19:01.082 NVMe Specification Version (Identify): 1.4 00:19:01.082 Maximum Queue Entries: 2048 00:19:01.082 Contiguous Queues Required: Yes 00:19:01.082 Arbitration Mechanisms Supported 00:19:01.082 Weighted Round Robin: Not Supported 00:19:01.082 Vendor Specific: Not Supported 00:19:01.082 Reset Timeout: 7500 ms 00:19:01.082 Doorbell Stride: 4 bytes 00:19:01.082 NVM Subsystem Reset: Not Supported 00:19:01.082 Command Sets Supported 00:19:01.082 NVM Command Set: Supported 00:19:01.082 Boot Partition: Not Supported 00:19:01.082 Memory Page Size Minimum: 4096 bytes 00:19:01.082 Memory Page Size Maximum: 65536 bytes 00:19:01.082 Persistent Memory Region: Not Supported 00:19:01.082 Optional Asynchronous Events Supported 00:19:01.082 Namespace Attribute Notices: Supported 00:19:01.082 Firmware Activation Notices: Not Supported 00:19:01.082 ANA Change Notices: Not Supported 00:19:01.082 PLE Aggregate Log Change Notices: Not Supported 00:19:01.082 LBA Status Info Alert Notices: Not Supported 00:19:01.082 EGE Aggregate Log Change Notices: Not Supported 00:19:01.082 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.082 Zone Descriptor Change Notices: Not Supported 00:19:01.082 Discovery Log Change Notices: Not Supported 00:19:01.082 Controller Attributes 00:19:01.082 128-bit Host Identifier: Not Supported 00:19:01.082 Non-Operational Permissive Mode: Not Supported 00:19:01.082 NVM Sets: Not Supported 00:19:01.082 Read Recovery Levels: Not Supported 00:19:01.082 Endurance Groups: Supported 00:19:01.082 Predictable Latency Mode: Not Supported 00:19:01.082 Traffic Based Keep ALive: Not Supported 00:19:01.082 
Namespace Granularity: Not Supported 00:19:01.082 SQ Associations: Not Supported 00:19:01.082 UUID List: Not Supported 00:19:01.082 Multi-Domain Subsystem: Not Supported 00:19:01.082 Fixed Capacity Management: Not Supported 00:19:01.082 Variable Capacity Management: Not Supported 00:19:01.082 Delete Endurance Group: Not Supported 00:19:01.082 Delete NVM Set: Not Supported 00:19:01.082 Extended LBA Formats Supported: Supported 00:19:01.082 Flexible Data Placement Supported: Supported 00:19:01.082 00:19:01.082 Controller Memory Buffer Support 00:19:01.082 ================================ 00:19:01.082 Supported: No 00:19:01.082 00:19:01.082 Persistent Memory Region Support 00:19:01.082 ================================ 00:19:01.082 Supported: No 00:19:01.082 00:19:01.082 Admin Command Set Attributes 00:19:01.082 ============================ 00:19:01.082 Security Send/Receive: Not Supported 00:19:01.082 Format NVM: Supported 00:19:01.082 Firmware Activate/Download: Not Supported 00:19:01.082 Namespace Management: Supported 00:19:01.082 Device Self-Test: Not Supported 00:19:01.082 Directives: Supported 00:19:01.082 NVMe-MI: Not Supported 00:19:01.082 Virtualization Management: Not Supported 00:19:01.082 Doorbell Buffer Config: Supported 00:19:01.082 Get LBA Status Capability: Not Supported 00:19:01.082 Command & Feature Lockdown Capability: Not Supported 00:19:01.082 Abort Command Limit: 4 00:19:01.082 Async Event Request Limit: 4 00:19:01.082 Number of Firmware Slots: N/A 00:19:01.082 Firmware Slot 1 Read-Only: N/A 00:19:01.082 Firmware Activation Without Reset: N/A 00:19:01.082 Multiple Update Detection Support: N/A 00:19:01.082 Firmware Update Granularity: No Information Provided 00:19:01.082 Per-Namespace SMART Log: Yes 00:19:01.082 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.082 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:01.082 Command Effects Log Page: Supported 00:19:01.082 Get Log Page Extended Data: Supported 00:19:01.082 Telemetry Log Pages: Not Supported 00:19:01.082 Persistent Event Log Pages: Not Supported 00:19:01.082 Supported Log Pages Log Page: May Support 00:19:01.082 Commands Supported & Effects Log Page: Not Supported 00:19:01.082 Feature Identifiers & Effects Log Page:May Support 00:19:01.082 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.082 Data Area 4 for Telemetry Log: Not Supported 00:19:01.082 Error Log Page Entries Supported: 1 00:19:01.082 Keep Alive: Not Supported 00:19:01.082 00:19:01.082 NVM Command Set Attributes 00:19:01.082 ========================== 00:19:01.082 Submission Queue Entry Size 00:19:01.082 Max: 64 00:19:01.082 Min: 64 00:19:01.082 Completion Queue Entry Size 00:19:01.082 Max: 16 00:19:01.082 Min: 16 00:19:01.082 Number of Namespaces: 256 00:19:01.082 Compare Command: Supported 00:19:01.082 Write Uncorrectable Command: Not Supported 00:19:01.082 Dataset Management Command: Supported 00:19:01.082 Write Zeroes Command: Supported 00:19:01.082 Set Features Save Field: Supported 00:19:01.082 Reservations: Not Supported 00:19:01.082 Timestamp: Supported 00:19:01.082 Copy: Supported 00:19:01.082 Volatile Write Cache: Present 00:19:01.082 Atomic Write Unit (Normal): 1 00:19:01.082 Atomic Write Unit (PFail): 1 00:19:01.082 Atomic Compare & Write Unit: 1 00:19:01.082 Fused Compare & Write: Not Supported 00:19:01.082 Scatter-Gather List 00:19:01.082 SGL Command Set: Supported 00:19:01.082 SGL Keyed: Not Supported 00:19:01.082 SGL Bit Bucket Descriptor: Not Supported 00:19:01.082 SGL Metadata Pointer: Not Supported 
00:19:01.082 Oversized SGL: Not Supported 00:19:01.082 SGL Metadata Address: Not Supported 00:19:01.082 SGL Offset: Not Supported 00:19:01.082 Transport SGL Data Block: Not Supported 00:19:01.082 Replay Protected Memory Block: Not Supported 00:19:01.082 00:19:01.082 Firmware Slot Information 00:19:01.082 ========================= 00:19:01.082 Active slot: 1 00:19:01.082 Slot 1 Firmware Revision: 1.0 00:19:01.082 00:19:01.082 00:19:01.082 Commands Supported and Effects 00:19:01.082 ============================== 00:19:01.082 Admin Commands 00:19:01.082 -------------- 00:19:01.082 Delete I/O Submission Queue (00h): Supported 00:19:01.082 Create I/O Submission Queue (01h): Supported 00:19:01.082 Get Log Page (02h): Supported 00:19:01.082 Delete I/O Completion Queue (04h): Supported 00:19:01.082 Create I/O Completion Queue (05h): Supported 00:19:01.082 Identify (06h): Supported 00:19:01.082 Abort (08h): Supported 00:19:01.083 Set Features (09h): Supported 00:19:01.083 Get Features (0Ah): Supported 00:19:01.083 Asynchronous Event Request (0Ch): Supported 00:19:01.083 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.083 Directive Send (19h): Supported 00:19:01.083 Directive Receive (1Ah): Supported 00:19:01.083 Virtualization Management (1Ch): Supported 00:19:01.083 Doorbell Buffer Config (7Ch): Supported 00:19:01.083 Format NVM (80h): Supported LBA-Change 00:19:01.083 I/O Commands 00:19:01.083 ------------ 00:19:01.083 Flush (00h): Supported LBA-Change 00:19:01.083 Write (01h): Supported LBA-Change 00:19:01.083 Read (02h): Supported 00:19:01.083 Compare (05h): Supported 00:19:01.083 Write Zeroes (08h): Supported LBA-Change 00:19:01.083 Dataset Management (09h): Supported LBA-Change 00:19:01.083 Unknown (0Ch): Supported 00:19:01.083 Unknown (12h): Supported 00:19:01.083 Copy (19h): Supported LBA-Change 00:19:01.083 Unknown (1Dh): Supported LBA-Change 00:19:01.083 00:19:01.083 Error Log 00:19:01.083 ========= 00:19:01.083 00:19:01.083 Arbitration 00:19:01.083 =========== 00:19:01.083 Arbitration Burst: no limit 00:19:01.083 00:19:01.083 Power Management 00:19:01.083 ================ 00:19:01.083 Number of Power States: 1 00:19:01.083 Current Power State: Power State #0 00:19:01.083 Power State #0: 00:19:01.083 Max Power: 25.00 W 00:19:01.083 Non-Operational State: Operational 00:19:01.083 Entry Latency: 16 microseconds 00:19:01.083 Exit Latency: 4 microseconds 00:19:01.083 Relative Read Throughput: 0 00:19:01.083 Relative Read Latency: 0 00:19:01.083 Relative Write Throughput: 0 00:19:01.083 Relative Write Latency: 0 00:19:01.083 Idle Power: Not Reported 00:19:01.083 Active Power: Not Reported 00:19:01.083 Non-Operational Permissive Mode: Not Supported 00:19:01.083 00:19:01.083 Health Information 00:19:01.083 ================== 00:19:01.083 Critical Warnings: 00:19:01.083 Available Spare Space: OK 00:19:01.083 Temperature: OK 00:19:01.083 Device Reliability: OK 00:19:01.083 Read Only: No 00:19:01.083 Volatile Memory Backup: OK 00:19:01.083 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.083 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.083 Available Spare: 0% 00:19:01.083 Available Spare Threshold: 0% 00:19:01.083 Life Percentage Used: [2024-11-27 04:41:08.197576] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62925 terminated unexpected 00:19:01.083 [2024-11-27 04:41:08.198578] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62925 terminated unexpected 00:19:01.083 [2024-11-27 
04:41:08.199092] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62925 terminated unexpected 00:19:01.083 [2024-11-27 04:41:08.199872] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62925 terminated unexpected 00:19:01.083 0% 00:19:01.083 Data Units Read: 752 00:19:01.083 Data Units Written: 681 00:19:01.083 Host Read Commands: 33134 00:19:01.083 Host Write Commands: 32557 00:19:01.083 Controller Busy Time: 0 minutes 00:19:01.083 Power Cycles: 0 00:19:01.083 Power On Hours: 0 hours 00:19:01.083 Unsafe Shutdowns: 0 00:19:01.083 Unrecoverable Media Errors: 0 00:19:01.083 Lifetime Error Log Entries: 0 00:19:01.083 Warning Temperature Time: 0 minutes 00:19:01.083 Critical Temperature Time: 0 minutes 00:19:01.083 00:19:01.083 Number of Queues 00:19:01.083 ================ 00:19:01.083 Number of I/O Submission Queues: 64 00:19:01.083 Number of I/O Completion Queues: 64 00:19:01.083 00:19:01.083 ZNS Specific Controller Data 00:19:01.083 ============================ 00:19:01.083 Zone Append Size Limit: 0 00:19:01.083 00:19:01.083 00:19:01.083 Active Namespaces 00:19:01.083 ================= 00:19:01.083 Namespace ID:1 00:19:01.083 Error Recovery Timeout: Unlimited 00:19:01.083 Command Set Identifier: NVM (00h) 00:19:01.083 Deallocate: Supported 00:19:01.083 Deallocated/Unwritten Error: Supported 00:19:01.083 Deallocated Read Value: All 0x00 00:19:01.083 Deallocate in Write Zeroes: Not Supported 00:19:01.083 Deallocated Guard Field: 0xFFFF 00:19:01.083 Flush: Supported 00:19:01.083 Reservation: Not Supported 00:19:01.083 Namespace Sharing Capabilities: Multiple Controllers 00:19:01.083 Size (in LBAs): 262144 (1GiB) 00:19:01.083 Capacity (in LBAs): 262144 (1GiB) 00:19:01.083 Utilization (in LBAs): 262144 (1GiB) 00:19:01.083 Thin Provisioning: Not Supported 00:19:01.083 Per-NS Atomic Units: No 00:19:01.083 Maximum Single Source Range Length: 128 00:19:01.083 Maximum Copy Length: 128 00:19:01.083 Maximum Source Range Count: 128 00:19:01.083 NGUID/EUI64 Never Reused: No 00:19:01.083 Namespace Write Protected: No 00:19:01.083 Endurance group ID: 1 00:19:01.083 Number of LBA Formats: 8 00:19:01.083 Current LBA Format: LBA Format #04 00:19:01.083 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.083 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.083 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.083 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.083 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.083 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.083 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.083 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.083 00:19:01.083 Get Feature FDP: 00:19:01.083 ================ 00:19:01.083 Enabled: Yes 00:19:01.083 FDP configuration index: 0 00:19:01.083 00:19:01.083 FDP configurations log page 00:19:01.083 =========================== 00:19:01.083 Number of FDP configurations: 1 00:19:01.083 Version: 0 00:19:01.083 Size: 112 00:19:01.083 FDP Configuration Descriptor: 0 00:19:01.083 Descriptor Size: 96 00:19:01.083 Reclaim Group Identifier format: 2 00:19:01.083 FDP Volatile Write Cache: Not Present 00:19:01.083 FDP Configuration: Valid 00:19:01.083 Vendor Specific Size: 0 00:19:01.083 Number of Reclaim Groups: 2 00:19:01.083 Number of Reclaim Unit Handles: 8 00:19:01.083 Max Placement Identifiers: 128 00:19:01.083 Number of Namespaces Supported: 256 00:19:01.083 Reclaim Unit Nominal Size: 6000000 bytes 00:19:01.083
Estimated Reclaim Unit Time Limit: Not Reported 00:19:01.083 RUH Desc #000: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #001: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #002: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #003: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #004: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #005: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #006: RUH Type: Initially Isolated 00:19:01.083 RUH Desc #007: RUH Type: Initially Isolated 00:19:01.083 00:19:01.083 FDP reclaim unit handle usage log page 00:19:01.083 ====================================== 00:19:01.083 Number of Reclaim Unit Handles: 8 00:19:01.083 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:01.083 RUH Usage Desc #001: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #002: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #003: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #004: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #005: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #006: RUH Attributes: Unused 00:19:01.083 RUH Usage Desc #007: RUH Attributes: Unused 00:19:01.083 00:19:01.083 FDP statistics log page 00:19:01.083 ======================= 00:19:01.083 Host bytes with metadata written: 429826048 00:19:01.083 Media bytes with metadata written: 429871104 00:19:01.083 Media bytes erased: 0 00:19:01.083 00:19:01.083 FDP events log page 00:19:01.083 =================== 00:19:01.083 Number of FDP events: 0 00:19:01.083 00:19:01.083 NVM Specific Namespace Data 00:19:01.083 =========================== 00:19:01.083 Logical Block Storage Tag Mask: 0 00:19:01.083 Protection Information Capabilities: 00:19:01.083 16b Guard Protection Information Storage Tag Support: No 00:19:01.083 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.083 Storage Tag Check Read Support: No 00:19:01.083 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.083 ===================================================== 00:19:01.083 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:01.083 ===================================================== 00:19:01.083 Controller Capabilities/Features 00:19:01.083 ================================ 00:19:01.084 Vendor ID: 1b36 00:19:01.084 Subsystem Vendor ID: 1af4 00:19:01.084 Serial Number: 12342 00:19:01.084 Model Number: QEMU NVMe Ctrl 00:19:01.084 Firmware Version: 8.0.0 00:19:01.084 Recommended Arb Burst: 6 00:19:01.084 IEEE OUI Identifier: 00 54 52 00:19:01.084 Multi-path I/O 00:19:01.084 May have multiple subsystem ports: No 00:19:01.084 May have multiple controllers: No 00:19:01.084 Associated with SR-IOV VF: No 00:19:01.084 Max Data Transfer Size: 524288 00:19:01.084 Max Number of Namespaces: 256 00:19:01.084 Max Number of I/O 
Queues: 64 00:19:01.084 NVMe Specification Version (VS): 1.4 00:19:01.084 NVMe Specification Version (Identify): 1.4 00:19:01.084 Maximum Queue Entries: 2048 00:19:01.084 Contiguous Queues Required: Yes 00:19:01.084 Arbitration Mechanisms Supported 00:19:01.084 Weighted Round Robin: Not Supported 00:19:01.084 Vendor Specific: Not Supported 00:19:01.084 Reset Timeout: 7500 ms 00:19:01.084 Doorbell Stride: 4 bytes 00:19:01.084 NVM Subsystem Reset: Not Supported 00:19:01.084 Command Sets Supported 00:19:01.084 NVM Command Set: Supported 00:19:01.084 Boot Partition: Not Supported 00:19:01.084 Memory Page Size Minimum: 4096 bytes 00:19:01.084 Memory Page Size Maximum: 65536 bytes 00:19:01.084 Persistent Memory Region: Not Supported 00:19:01.084 Optional Asynchronous Events Supported 00:19:01.084 Namespace Attribute Notices: Supported 00:19:01.084 Firmware Activation Notices: Not Supported 00:19:01.084 ANA Change Notices: Not Supported 00:19:01.084 PLE Aggregate Log Change Notices: Not Supported 00:19:01.084 LBA Status Info Alert Notices: Not Supported 00:19:01.084 EGE Aggregate Log Change Notices: Not Supported 00:19:01.084 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.084 Zone Descriptor Change Notices: Not Supported 00:19:01.084 Discovery Log Change Notices: Not Supported 00:19:01.084 Controller Attributes 00:19:01.084 128-bit Host Identifier: Not Supported 00:19:01.084 Non-Operational Permissive Mode: Not Supported 00:19:01.084 NVM Sets: Not Supported 00:19:01.084 Read Recovery Levels: Not Supported 00:19:01.084 Endurance Groups: Not Supported 00:19:01.084 Predictable Latency Mode: Not Supported 00:19:01.084 Traffic Based Keep ALive: Not Supported 00:19:01.084 Namespace Granularity: Not Supported 00:19:01.084 SQ Associations: Not Supported 00:19:01.084 UUID List: Not Supported 00:19:01.084 Multi-Domain Subsystem: Not Supported 00:19:01.084 Fixed Capacity Management: Not Supported 00:19:01.084 Variable Capacity Management: Not Supported 00:19:01.084 Delete Endurance Group: Not Supported 00:19:01.084 Delete NVM Set: Not Supported 00:19:01.084 Extended LBA Formats Supported: Supported 00:19:01.084 Flexible Data Placement Supported: Not Supported 00:19:01.084 00:19:01.084 Controller Memory Buffer Support 00:19:01.084 ================================ 00:19:01.084 Supported: No 00:19:01.084 00:19:01.084 Persistent Memory Region Support 00:19:01.084 ================================ 00:19:01.084 Supported: No 00:19:01.084 00:19:01.084 Admin Command Set Attributes 00:19:01.084 ============================ 00:19:01.084 Security Send/Receive: Not Supported 00:19:01.084 Format NVM: Supported 00:19:01.084 Firmware Activate/Download: Not Supported 00:19:01.084 Namespace Management: Supported 00:19:01.084 Device Self-Test: Not Supported 00:19:01.084 Directives: Supported 00:19:01.084 NVMe-MI: Not Supported 00:19:01.084 Virtualization Management: Not Supported 00:19:01.084 Doorbell Buffer Config: Supported 00:19:01.084 Get LBA Status Capability: Not Supported 00:19:01.084 Command & Feature Lockdown Capability: Not Supported 00:19:01.084 Abort Command Limit: 4 00:19:01.084 Async Event Request Limit: 4 00:19:01.084 Number of Firmware Slots: N/A 00:19:01.084 Firmware Slot 1 Read-Only: N/A 00:19:01.084 Firmware Activation Without Reset: N/A 00:19:01.084 Multiple Update Detection Support: N/A 00:19:01.084 Firmware Update Granularity: No Information Provided 00:19:01.084 Per-Namespace SMART Log: Yes 00:19:01.084 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.084 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:19:01.084 Command Effects Log Page: Supported 00:19:01.084 Get Log Page Extended Data: Supported 00:19:01.084 Telemetry Log Pages: Not Supported 00:19:01.084 Persistent Event Log Pages: Not Supported 00:19:01.084 Supported Log Pages Log Page: May Support 00:19:01.084 Commands Supported & Effects Log Page: Not Supported 00:19:01.084 Feature Identifiers & Effects Log Page:May Support 00:19:01.084 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.084 Data Area 4 for Telemetry Log: Not Supported 00:19:01.084 Error Log Page Entries Supported: 1 00:19:01.084 Keep Alive: Not Supported 00:19:01.084 00:19:01.084 NVM Command Set Attributes 00:19:01.084 ========================== 00:19:01.084 Submission Queue Entry Size 00:19:01.084 Max: 64 00:19:01.084 Min: 64 00:19:01.084 Completion Queue Entry Size 00:19:01.084 Max: 16 00:19:01.084 Min: 16 00:19:01.084 Number of Namespaces: 256 00:19:01.084 Compare Command: Supported 00:19:01.084 Write Uncorrectable Command: Not Supported 00:19:01.084 Dataset Management Command: Supported 00:19:01.084 Write Zeroes Command: Supported 00:19:01.084 Set Features Save Field: Supported 00:19:01.084 Reservations: Not Supported 00:19:01.084 Timestamp: Supported 00:19:01.084 Copy: Supported 00:19:01.084 Volatile Write Cache: Present 00:19:01.084 Atomic Write Unit (Normal): 1 00:19:01.084 Atomic Write Unit (PFail): 1 00:19:01.084 Atomic Compare & Write Unit: 1 00:19:01.084 Fused Compare & Write: Not Supported 00:19:01.084 Scatter-Gather List 00:19:01.084 SGL Command Set: Supported 00:19:01.084 SGL Keyed: Not Supported 00:19:01.084 SGL Bit Bucket Descriptor: Not Supported 00:19:01.084 SGL Metadata Pointer: Not Supported 00:19:01.084 Oversized SGL: Not Supported 00:19:01.084 SGL Metadata Address: Not Supported 00:19:01.084 SGL Offset: Not Supported 00:19:01.084 Transport SGL Data Block: Not Supported 00:19:01.084 Replay Protected Memory Block: Not Supported 00:19:01.084 00:19:01.084 Firmware Slot Information 00:19:01.084 ========================= 00:19:01.084 Active slot: 1 00:19:01.084 Slot 1 Firmware Revision: 1.0 00:19:01.084 00:19:01.084 00:19:01.084 Commands Supported and Effects 00:19:01.084 ============================== 00:19:01.084 Admin Commands 00:19:01.084 -------------- 00:19:01.084 Delete I/O Submission Queue (00h): Supported 00:19:01.084 Create I/O Submission Queue (01h): Supported 00:19:01.084 Get Log Page (02h): Supported 00:19:01.084 Delete I/O Completion Queue (04h): Supported 00:19:01.084 Create I/O Completion Queue (05h): Supported 00:19:01.084 Identify (06h): Supported 00:19:01.084 Abort (08h): Supported 00:19:01.084 Set Features (09h): Supported 00:19:01.084 Get Features (0Ah): Supported 00:19:01.084 Asynchronous Event Request (0Ch): Supported 00:19:01.084 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.084 Directive Send (19h): Supported 00:19:01.084 Directive Receive (1Ah): Supported 00:19:01.084 Virtualization Management (1Ch): Supported 00:19:01.084 Doorbell Buffer Config (7Ch): Supported 00:19:01.084 Format NVM (80h): Supported LBA-Change 00:19:01.084 I/O Commands 00:19:01.084 ------------ 00:19:01.084 Flush (00h): Supported LBA-Change 00:19:01.084 Write (01h): Supported LBA-Change 00:19:01.085 Read (02h): Supported 00:19:01.085 Compare (05h): Supported 00:19:01.085 Write Zeroes (08h): Supported LBA-Change 00:19:01.085 Dataset Management (09h): Supported LBA-Change 00:19:01.085 Unknown (0Ch): Supported 00:19:01.085 Unknown (12h): Supported 00:19:01.085 Copy (19h): Supported LBA-Change 
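Worth a spot-check: the FDP statistics log page printed just above the 12342 controller header reports 429826048 host bytes and 429871104 media bytes written with metadata, and the ratio of the two is a rough write-amplification proxy. A minimal sketch in Python, with the two values copied from the log and the variable names our own:

    # FDP statistics from the log page above.
    host_bytes  = 429_826_048   # "Host bytes with metadata written"
    media_bytes = 429_871_104   # "Media bytes with metadata written"

    # Media writes per host write: a crude write-amplification proxy.
    print(f"WAF ~ {media_bytes / host_bytes:.6f}"
          f" (+{media_bytes - host_bytes} bytes on media)")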
00:19:01.085 Unknown (1Dh): Supported LBA-Change 00:19:01.085 00:19:01.085 Error Log 00:19:01.085 ========= 00:19:01.085 00:19:01.085 Arbitration 00:19:01.085 =========== 00:19:01.085 Arbitration Burst: no limit 00:19:01.085 00:19:01.085 Power Management 00:19:01.085 ================ 00:19:01.085 Number of Power States: 1 00:19:01.085 Current Power State: Power State #0 00:19:01.085 Power State #0: 00:19:01.085 Max Power: 25.00 W 00:19:01.085 Non-Operational State: Operational 00:19:01.085 Entry Latency: 16 microseconds 00:19:01.085 Exit Latency: 4 microseconds 00:19:01.085 Relative Read Throughput: 0 00:19:01.085 Relative Read Latency: 0 00:19:01.085 Relative Write Throughput: 0 00:19:01.085 Relative Write Latency: 0 00:19:01.085 Idle Power: Not Reported 00:19:01.085 Active Power: Not Reported 00:19:01.085 Non-Operational Permissive Mode: Not Supported 00:19:01.085 00:19:01.085 Health Information 00:19:01.085 ================== 00:19:01.085 Critical Warnings: 00:19:01.085 Available Spare Space: OK 00:19:01.085 Temperature: OK 00:19:01.085 Device Reliability: OK 00:19:01.085 Read Only: No 00:19:01.085 Volatile Memory Backup: OK 00:19:01.085 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.085 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.085 Available Spare: 0% 00:19:01.085 Available Spare Threshold: 0% 00:19:01.085 Life Percentage Used: 0% 00:19:01.085 Data Units Read: 1962 00:19:01.085 Data Units Written: 1749 00:19:01.085 Host Read Commands: 96629 00:19:01.085 Host Write Commands: 94898 00:19:01.085 Controller Busy Time: 0 minutes 00:19:01.085 Power Cycles: 0 00:19:01.085 Power On Hours: 0 hours 00:19:01.085 Unsafe Shutdowns: 0 00:19:01.085 Unrecoverable Media Errors: 0 00:19:01.085 Lifetime Error Log Entries: 0 00:19:01.085 Warning Temperature Time: 0 minutes 00:19:01.085 Critical Temperature Time: 0 minutes 00:19:01.085 00:19:01.085 Number of Queues 00:19:01.085 ================ 00:19:01.085 Number of I/O Submission Queues: 64 00:19:01.085 Number of I/O Completion Queues: 64 00:19:01.085 00:19:01.085 ZNS Specific Controller Data 00:19:01.085 ============================ 00:19:01.085 Zone Append Size Limit: 0 00:19:01.085 00:19:01.085 00:19:01.085 Active Namespaces 00:19:01.085 ================= 00:19:01.085 Namespace ID:1 00:19:01.085 Error Recovery Timeout: Unlimited 00:19:01.085 Command Set Identifier: NVM (00h) 00:19:01.085 Deallocate: Supported 00:19:01.085 Deallocated/Unwritten Error: Supported 00:19:01.085 Deallocated Read Value: All 0x00 00:19:01.085 Deallocate in Write Zeroes: Not Supported 00:19:01.085 Deallocated Guard Field: 0xFFFF 00:19:01.085 Flush: Supported 00:19:01.085 Reservation: Not Supported 00:19:01.085 Namespace Sharing Capabilities: Private 00:19:01.085 Size (in LBAs): 1048576 (4GiB) 00:19:01.085 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.085 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.085 Thin Provisioning: Not Supported 00:19:01.085 Per-NS Atomic Units: No 00:19:01.085 Maximum Single Source Range Length: 128 00:19:01.085 Maximum Copy Length: 128 00:19:01.085 Maximum Source Range Count: 128 00:19:01.085 NGUID/EUI64 Never Reused: No 00:19:01.085 Namespace Write Protected: No 00:19:01.085 Number of LBA Formats: 8 00:19:01.085 Current LBA Format: LBA Format #04 00:19:01.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.085 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:19:01.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.085 00:19:01.085 NVM Specific Namespace Data 00:19:01.085 =========================== 00:19:01.085 Logical Block Storage Tag Mask: 0 00:19:01.085 Protection Information Capabilities: 00:19:01.085 16b Guard Protection Information Storage Tag Support: No 00:19:01.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.085 Storage Tag Check Read Support: No 00:19:01.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Namespace ID:2 00:19:01.085 Error Recovery Timeout: Unlimited 00:19:01.085 Command Set Identifier: NVM (00h) 00:19:01.085 Deallocate: Supported 00:19:01.085 Deallocated/Unwritten Error: Supported 00:19:01.085 Deallocated Read Value: All 0x00 00:19:01.085 Deallocate in Write Zeroes: Not Supported 00:19:01.085 Deallocated Guard Field: 0xFFFF 00:19:01.085 Flush: Supported 00:19:01.085 Reservation: Not Supported 00:19:01.085 Namespace Sharing Capabilities: Private 00:19:01.085 Size (in LBAs): 1048576 (4GiB) 00:19:01.085 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.085 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.085 Thin Provisioning: Not Supported 00:19:01.085 Per-NS Atomic Units: No 00:19:01.085 Maximum Single Source Range Length: 128 00:19:01.085 Maximum Copy Length: 128 00:19:01.085 Maximum Source Range Count: 128 00:19:01.085 NGUID/EUI64 Never Reused: No 00:19:01.085 Namespace Write Protected: No 00:19:01.085 Number of LBA Formats: 8 00:19:01.085 Current LBA Format: LBA Format #04 00:19:01.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.085 00:19:01.085 NVM Specific Namespace Data 00:19:01.085 =========================== 00:19:01.085 Logical Block Storage Tag Mask: 0 00:19:01.085 Protection Information Capabilities: 00:19:01.085 16b Guard Protection Information Storage Tag Support: No 00:19:01.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.085 Storage Tag Check Read Support: No 00:19:01.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:19:01.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.085 Namespace ID:3 00:19:01.085 Error Recovery Timeout: Unlimited 00:19:01.085 Command Set Identifier: NVM (00h) 00:19:01.085 Deallocate: Supported 00:19:01.085 Deallocated/Unwritten Error: Supported 00:19:01.085 Deallocated Read Value: All 0x00 00:19:01.085 Deallocate in Write Zeroes: Not Supported 00:19:01.085 Deallocated Guard Field: 0xFFFF 00:19:01.085 Flush: Supported 00:19:01.085 Reservation: Not Supported 00:19:01.085 Namespace Sharing Capabilities: Private 00:19:01.085 Size (in LBAs): 1048576 (4GiB) 00:19:01.085 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.085 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.085 Thin Provisioning: Not Supported 00:19:01.085 Per-NS Atomic Units: No 00:19:01.085 Maximum Single Source Range Length: 128 00:19:01.085 Maximum Copy Length: 128 00:19:01.085 Maximum Source Range Count: 128 00:19:01.085 NGUID/EUI64 Never Reused: No 00:19:01.085 Namespace Write Protected: No 00:19:01.085 Number of LBA Formats: 8 00:19:01.085 Current LBA Format: LBA Format #04 00:19:01.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.086 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.086 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.086 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.086 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.086 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.086 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.086 00:19:01.086 NVM Specific Namespace Data 00:19:01.086 =========================== 00:19:01.086 Logical Block Storage Tag Mask: 0 00:19:01.086 Protection Information Capabilities: 00:19:01.086 16b Guard Protection Information Storage Tag Support: No 00:19:01.086 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.086 Storage Tag Check Read Support: No 00:19:01.086 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.086 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:01.086 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:01.345 ===================================================== 00:19:01.345 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:01.345 ===================================================== 00:19:01.345 Controller Capabilities/Features 00:19:01.345 ================================ 00:19:01.345 Vendor ID: 1b36 00:19:01.345 Subsystem Vendor ID: 1af4 00:19:01.345 Serial Number: 12340 00:19:01.345 Model Number: QEMU NVMe Ctrl 00:19:01.345 Firmware Version: 8.0.0 00:19:01.345 Recommended Arb Burst: 6 00:19:01.345 IEEE OUI Identifier: 00 54 52 00:19:01.345 Multi-path I/O 00:19:01.345 May have multiple subsystem ports: No 00:19:01.345 May have multiple controllers: No 00:19:01.345 Associated with SR-IOV VF: No 00:19:01.345 Max Data Transfer Size: 524288 00:19:01.345 Max Number of Namespaces: 256 00:19:01.345 Max Number of I/O Queues: 64 00:19:01.345 NVMe Specification Version (VS): 1.4 00:19:01.345 NVMe Specification Version (Identify): 1.4 00:19:01.345 Maximum Queue Entries: 2048 00:19:01.345 Contiguous Queues Required: Yes 00:19:01.345 Arbitration Mechanisms Supported 00:19:01.345 Weighted Round Robin: Not Supported 00:19:01.345 Vendor Specific: Not Supported 00:19:01.345 Reset Timeout: 7500 ms 00:19:01.345 Doorbell Stride: 4 bytes 00:19:01.345 NVM Subsystem Reset: Not Supported 00:19:01.345 Command Sets Supported 00:19:01.345 NVM Command Set: Supported 00:19:01.345 Boot Partition: Not Supported 00:19:01.345 Memory Page Size Minimum: 4096 bytes 00:19:01.345 Memory Page Size Maximum: 65536 bytes 00:19:01.345 Persistent Memory Region: Not Supported 00:19:01.345 Optional Asynchronous Events Supported 00:19:01.345 Namespace Attribute Notices: Supported 00:19:01.345 Firmware Activation Notices: Not Supported 00:19:01.345 ANA Change Notices: Not Supported 00:19:01.345 PLE Aggregate Log Change Notices: Not Supported 00:19:01.345 LBA Status Info Alert Notices: Not Supported 00:19:01.345 EGE Aggregate Log Change Notices: Not Supported 00:19:01.345 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.345 Zone Descriptor Change Notices: Not Supported 00:19:01.345 Discovery Log Change Notices: Not Supported 00:19:01.345 Controller Attributes 00:19:01.345 128-bit Host Identifier: Not Supported 00:19:01.345 Non-Operational Permissive Mode: Not Supported 00:19:01.345 NVM Sets: Not Supported 00:19:01.345 Read Recovery Levels: Not Supported 00:19:01.345 Endurance Groups: Not Supported 00:19:01.345 Predictable Latency Mode: Not Supported 00:19:01.345 Traffic Based Keep ALive: Not Supported 00:19:01.345 Namespace Granularity: Not Supported 00:19:01.345 SQ Associations: Not Supported 00:19:01.345 UUID List: Not Supported 00:19:01.345 Multi-Domain Subsystem: Not Supported 00:19:01.345 Fixed Capacity Management: Not Supported 00:19:01.345 Variable Capacity Management: Not Supported 00:19:01.345 Delete Endurance Group: Not Supported 00:19:01.345 Delete NVM Set: Not Supported 00:19:01.345 Extended LBA Formats Supported: Supported 00:19:01.345 Flexible Data Placement Supported: Not Supported 00:19:01.345 00:19:01.345 Controller Memory Buffer Support 00:19:01.345 ================================ 00:19:01.345 Supported: No 00:19:01.345 00:19:01.345 Persistent Memory Region Support 00:19:01.345 ================================ 00:19:01.345 Supported: No 00:19:01.345 00:19:01.345 Admin Command Set Attributes 00:19:01.345 ============================ 00:19:01.345 Security Send/Receive: Not Supported 00:19:01.345 
Format NVM: Supported 00:19:01.345 Firmware Activate/Download: Not Supported 00:19:01.345 Namespace Management: Supported 00:19:01.346 Device Self-Test: Not Supported 00:19:01.346 Directives: Supported 00:19:01.346 NVMe-MI: Not Supported 00:19:01.346 Virtualization Management: Not Supported 00:19:01.346 Doorbell Buffer Config: Supported 00:19:01.346 Get LBA Status Capability: Not Supported 00:19:01.346 Command & Feature Lockdown Capability: Not Supported 00:19:01.346 Abort Command Limit: 4 00:19:01.346 Async Event Request Limit: 4 00:19:01.346 Number of Firmware Slots: N/A 00:19:01.346 Firmware Slot 1 Read-Only: N/A 00:19:01.346 Firmware Activation Without Reset: N/A 00:19:01.346 Multiple Update Detection Support: N/A 00:19:01.346 Firmware Update Granularity: No Information Provided 00:19:01.346 Per-Namespace SMART Log: Yes 00:19:01.346 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.346 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:01.346 Command Effects Log Page: Supported 00:19:01.346 Get Log Page Extended Data: Supported 00:19:01.346 Telemetry Log Pages: Not Supported 00:19:01.346 Persistent Event Log Pages: Not Supported 00:19:01.346 Supported Log Pages Log Page: May Support 00:19:01.346 Commands Supported & Effects Log Page: Not Supported 00:19:01.346 Feature Identifiers & Effects Log Page:May Support 00:19:01.346 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.346 Data Area 4 for Telemetry Log: Not Supported 00:19:01.346 Error Log Page Entries Supported: 1 00:19:01.346 Keep Alive: Not Supported 00:19:01.346 00:19:01.346 NVM Command Set Attributes 00:19:01.346 ========================== 00:19:01.346 Submission Queue Entry Size 00:19:01.346 Max: 64 00:19:01.346 Min: 64 00:19:01.346 Completion Queue Entry Size 00:19:01.346 Max: 16 00:19:01.346 Min: 16 00:19:01.346 Number of Namespaces: 256 00:19:01.346 Compare Command: Supported 00:19:01.346 Write Uncorrectable Command: Not Supported 00:19:01.346 Dataset Management Command: Supported 00:19:01.346 Write Zeroes Command: Supported 00:19:01.346 Set Features Save Field: Supported 00:19:01.346 Reservations: Not Supported 00:19:01.346 Timestamp: Supported 00:19:01.346 Copy: Supported 00:19:01.346 Volatile Write Cache: Present 00:19:01.346 Atomic Write Unit (Normal): 1 00:19:01.346 Atomic Write Unit (PFail): 1 00:19:01.346 Atomic Compare & Write Unit: 1 00:19:01.346 Fused Compare & Write: Not Supported 00:19:01.346 Scatter-Gather List 00:19:01.346 SGL Command Set: Supported 00:19:01.346 SGL Keyed: Not Supported 00:19:01.346 SGL Bit Bucket Descriptor: Not Supported 00:19:01.346 SGL Metadata Pointer: Not Supported 00:19:01.346 Oversized SGL: Not Supported 00:19:01.346 SGL Metadata Address: Not Supported 00:19:01.346 SGL Offset: Not Supported 00:19:01.346 Transport SGL Data Block: Not Supported 00:19:01.346 Replay Protected Memory Block: Not Supported 00:19:01.346 00:19:01.346 Firmware Slot Information 00:19:01.346 ========================= 00:19:01.346 Active slot: 1 00:19:01.346 Slot 1 Firmware Revision: 1.0 00:19:01.346 00:19:01.346 00:19:01.346 Commands Supported and Effects 00:19:01.346 ============================== 00:19:01.346 Admin Commands 00:19:01.346 -------------- 00:19:01.346 Delete I/O Submission Queue (00h): Supported 00:19:01.346 Create I/O Submission Queue (01h): Supported 00:19:01.346 Get Log Page (02h): Supported 00:19:01.346 Delete I/O Completion Queue (04h): Supported 00:19:01.346 Create I/O Completion Queue (05h): Supported 00:19:01.346 Identify (06h): Supported 00:19:01.346 Abort (08h): Supported 
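The 12340 controller above reports Max Data Transfer Size: 524288 alongside Memory Page Size Minimum: 4096 bytes. In NVMe, Identify's MDTS field is a power of two in units of the minimum memory page size, so those two numbers pin the raw field down; a quick check in Python (variable names are our own):

    import math

    max_xfer_bytes = 524_288   # "Max Data Transfer Size" from the dump above
    mps_min_bytes  = 4_096     # "Memory Page Size Minimum" (CAP.MPSMIN)

    # MDTS is reported as log2(max transfer / minimum page size).
    mdts = int(math.log2(max_xfer_bytes // mps_min_bytes))
    assert mps_min_bytes << mdts == max_xfer_bytes
    print(f"raw MDTS field = {mdts}")   # 7, i.e. 2**7 pages of 4 KiB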
00:19:01.346 Set Features (09h): Supported 00:19:01.346 Get Features (0Ah): Supported 00:19:01.346 Asynchronous Event Request (0Ch): Supported 00:19:01.346 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.346 Directive Send (19h): Supported 00:19:01.346 Directive Receive (1Ah): Supported 00:19:01.346 Virtualization Management (1Ch): Supported 00:19:01.346 Doorbell Buffer Config (7Ch): Supported 00:19:01.346 Format NVM (80h): Supported LBA-Change 00:19:01.346 I/O Commands 00:19:01.346 ------------ 00:19:01.346 Flush (00h): Supported LBA-Change 00:19:01.346 Write (01h): Supported LBA-Change 00:19:01.346 Read (02h): Supported 00:19:01.346 Compare (05h): Supported 00:19:01.346 Write Zeroes (08h): Supported LBA-Change 00:19:01.346 Dataset Management (09h): Supported LBA-Change 00:19:01.346 Unknown (0Ch): Supported 00:19:01.346 Unknown (12h): Supported 00:19:01.346 Copy (19h): Supported LBA-Change 00:19:01.346 Unknown (1Dh): Supported LBA-Change 00:19:01.346 00:19:01.346 Error Log 00:19:01.346 ========= 00:19:01.346 00:19:01.346 Arbitration 00:19:01.346 =========== 00:19:01.346 Arbitration Burst: no limit 00:19:01.346 00:19:01.346 Power Management 00:19:01.346 ================ 00:19:01.346 Number of Power States: 1 00:19:01.346 Current Power State: Power State #0 00:19:01.346 Power State #0: 00:19:01.346 Max Power: 25.00 W 00:19:01.346 Non-Operational State: Operational 00:19:01.346 Entry Latency: 16 microseconds 00:19:01.346 Exit Latency: 4 microseconds 00:19:01.346 Relative Read Throughput: 0 00:19:01.346 Relative Read Latency: 0 00:19:01.346 Relative Write Throughput: 0 00:19:01.346 Relative Write Latency: 0 00:19:01.346 Idle Power: Not Reported 00:19:01.346 Active Power: Not Reported 00:19:01.346 Non-Operational Permissive Mode: Not Supported 00:19:01.346 00:19:01.346 Health Information 00:19:01.346 ================== 00:19:01.346 Critical Warnings: 00:19:01.346 Available Spare Space: OK 00:19:01.346 Temperature: OK 00:19:01.346 Device Reliability: OK 00:19:01.346 Read Only: No 00:19:01.346 Volatile Memory Backup: OK 00:19:01.346 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.346 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.346 Available Spare: 0% 00:19:01.346 Available Spare Threshold: 0% 00:19:01.346 Life Percentage Used: 0% 00:19:01.346 Data Units Read: 597 00:19:01.346 Data Units Written: 525 00:19:01.346 Host Read Commands: 31381 00:19:01.346 Host Write Commands: 31167 00:19:01.346 Controller Busy Time: 0 minutes 00:19:01.346 Power Cycles: 0 00:19:01.346 Power On Hours: 0 hours 00:19:01.346 Unsafe Shutdowns: 0 00:19:01.346 Unrecoverable Media Errors: 0 00:19:01.346 Lifetime Error Log Entries: 0 00:19:01.346 Warning Temperature Time: 0 minutes 00:19:01.346 Critical Temperature Time: 0 minutes 00:19:01.346 00:19:01.346 Number of Queues 00:19:01.346 ================ 00:19:01.346 Number of I/O Submission Queues: 64 00:19:01.346 Number of I/O Completion Queues: 64 00:19:01.346 00:19:01.346 ZNS Specific Controller Data 00:19:01.346 ============================ 00:19:01.346 Zone Append Size Limit: 0 00:19:01.346 00:19:01.346 00:19:01.346 Active Namespaces 00:19:01.346 ================= 00:19:01.346 Namespace ID:1 00:19:01.346 Error Recovery Timeout: Unlimited 00:19:01.346 Command Set Identifier: NVM (00h) 00:19:01.346 Deallocate: Supported 00:19:01.346 Deallocated/Unwritten Error: Supported 00:19:01.346 Deallocated Read Value: All 0x00 00:19:01.346 Deallocate in Write Zeroes: Not Supported 00:19:01.346 Deallocated Guard Field: 0xFFFF 00:19:01.346 Flush: 
Supported 00:19:01.346 Reservation: Not Supported 00:19:01.346 Metadata Transferred as: Separate Metadata Buffer 00:19:01.346 Namespace Sharing Capabilities: Private 00:19:01.346 Size (in LBAs): 1548666 (5GiB) 00:19:01.346 Capacity (in LBAs): 1548666 (5GiB) 00:19:01.346 Utilization (in LBAs): 1548666 (5GiB) 00:19:01.346 Thin Provisioning: Not Supported 00:19:01.346 Per-NS Atomic Units: No 00:19:01.346 Maximum Single Source Range Length: 128 00:19:01.346 Maximum Copy Length: 128 00:19:01.346 Maximum Source Range Count: 128 00:19:01.346 NGUID/EUI64 Never Reused: No 00:19:01.346 Namespace Write Protected: No 00:19:01.346 Number of LBA Formats: 8 00:19:01.346 Current LBA Format: LBA Format #07 00:19:01.346 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.346 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.347 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.347 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.347 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.347 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.347 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.347 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.347 00:19:01.347 NVM Specific Namespace Data 00:19:01.347 =========================== 00:19:01.347 Logical Block Storage Tag Mask: 0 00:19:01.347 Protection Information Capabilities: 00:19:01.347 16b Guard Protection Information Storage Tag Support: No 00:19:01.347 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.347 Storage Tag Check Read Support: No 00:19:01.347 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.347 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:01.347 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:19:01.607 ===================================================== 00:19:01.607 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:01.607 ===================================================== 00:19:01.607 Controller Capabilities/Features 00:19:01.607 ================================ 00:19:01.607 Vendor ID: 1b36 00:19:01.607 Subsystem Vendor ID: 1af4 00:19:01.607 Serial Number: 12341 00:19:01.607 Model Number: QEMU NVMe Ctrl 00:19:01.607 Firmware Version: 8.0.0 00:19:01.607 Recommended Arb Burst: 6 00:19:01.607 IEEE OUI Identifier: 00 54 52 00:19:01.607 Multi-path I/O 00:19:01.607 May have multiple subsystem ports: No 00:19:01.607 May have multiple controllers: No 00:19:01.607 Associated with SR-IOV VF: No 00:19:01.607 Max Data Transfer Size: 524288 00:19:01.607 Max Number of Namespaces: 256 00:19:01.607 Max Number of I/O Queues: 64 00:19:01.607 NVMe 
Specification Version (VS): 1.4 00:19:01.607 NVMe Specification Version (Identify): 1.4 00:19:01.607 Maximum Queue Entries: 2048 00:19:01.607 Contiguous Queues Required: Yes 00:19:01.607 Arbitration Mechanisms Supported 00:19:01.607 Weighted Round Robin: Not Supported 00:19:01.607 Vendor Specific: Not Supported 00:19:01.607 Reset Timeout: 7500 ms 00:19:01.607 Doorbell Stride: 4 bytes 00:19:01.607 NVM Subsystem Reset: Not Supported 00:19:01.607 Command Sets Supported 00:19:01.607 NVM Command Set: Supported 00:19:01.607 Boot Partition: Not Supported 00:19:01.607 Memory Page Size Minimum: 4096 bytes 00:19:01.607 Memory Page Size Maximum: 65536 bytes 00:19:01.607 Persistent Memory Region: Not Supported 00:19:01.607 Optional Asynchronous Events Supported 00:19:01.607 Namespace Attribute Notices: Supported 00:19:01.608 Firmware Activation Notices: Not Supported 00:19:01.608 ANA Change Notices: Not Supported 00:19:01.608 PLE Aggregate Log Change Notices: Not Supported 00:19:01.608 LBA Status Info Alert Notices: Not Supported 00:19:01.608 EGE Aggregate Log Change Notices: Not Supported 00:19:01.608 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.608 Zone Descriptor Change Notices: Not Supported 00:19:01.608 Discovery Log Change Notices: Not Supported 00:19:01.608 Controller Attributes 00:19:01.608 128-bit Host Identifier: Not Supported 00:19:01.608 Non-Operational Permissive Mode: Not Supported 00:19:01.608 NVM Sets: Not Supported 00:19:01.608 Read Recovery Levels: Not Supported 00:19:01.608 Endurance Groups: Not Supported 00:19:01.608 Predictable Latency Mode: Not Supported 00:19:01.608 Traffic Based Keep ALive: Not Supported 00:19:01.608 Namespace Granularity: Not Supported 00:19:01.608 SQ Associations: Not Supported 00:19:01.608 UUID List: Not Supported 00:19:01.608 Multi-Domain Subsystem: Not Supported 00:19:01.608 Fixed Capacity Management: Not Supported 00:19:01.608 Variable Capacity Management: Not Supported 00:19:01.608 Delete Endurance Group: Not Supported 00:19:01.608 Delete NVM Set: Not Supported 00:19:01.608 Extended LBA Formats Supported: Supported 00:19:01.608 Flexible Data Placement Supported: Not Supported 00:19:01.608 00:19:01.608 Controller Memory Buffer Support 00:19:01.608 ================================ 00:19:01.608 Supported: No 00:19:01.608 00:19:01.608 Persistent Memory Region Support 00:19:01.608 ================================ 00:19:01.608 Supported: No 00:19:01.608 00:19:01.608 Admin Command Set Attributes 00:19:01.608 ============================ 00:19:01.608 Security Send/Receive: Not Supported 00:19:01.608 Format NVM: Supported 00:19:01.608 Firmware Activate/Download: Not Supported 00:19:01.608 Namespace Management: Supported 00:19:01.608 Device Self-Test: Not Supported 00:19:01.608 Directives: Supported 00:19:01.608 NVMe-MI: Not Supported 00:19:01.608 Virtualization Management: Not Supported 00:19:01.608 Doorbell Buffer Config: Supported 00:19:01.608 Get LBA Status Capability: Not Supported 00:19:01.608 Command & Feature Lockdown Capability: Not Supported 00:19:01.608 Abort Command Limit: 4 00:19:01.608 Async Event Request Limit: 4 00:19:01.608 Number of Firmware Slots: N/A 00:19:01.608 Firmware Slot 1 Read-Only: N/A 00:19:01.608 Firmware Activation Without Reset: N/A 00:19:01.608 Multiple Update Detection Support: N/A 00:19:01.608 Firmware Update Granularity: No Information Provided 00:19:01.608 Per-Namespace SMART Log: Yes 00:19:01.608 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.608 Subsystem NQN: nqn.2019-08.org.qemu:12341 
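The 12341 controller, like the others here, reports Doorbell Stride: 4 bytes, i.e. CAP.DSTRD = 0 (the stride is 4 << DSTRD). Per the NVMe register map, each queue pair's doorbells sit at fixed offsets from 0x1000; a small sketch of that layout in Python, assuming DSTRD = 0 as printed above:

    DSTRD = 0
    stride = 4 << DSTRD          # "Doorbell Stride: 4 bytes"

    def sq_tail_doorbell(qid: int) -> int:
        # SQyTDBL lives at 0x1000 + (2*qid) * stride.
        return 0x1000 + (2 * qid) * stride

    def cq_head_doorbell(qid: int) -> int:
        # CQyHDBL follows at 0x1000 + (2*qid + 1) * stride.
        return 0x1000 + (2 * qid + 1) * stride

    # Admin queue pair (qid 0): 0x1000 and 0x1004.
    print(hex(sq_tail_doorbell(0)), hex(cq_head_doorbell(0)))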
00:19:01.608 Command Effects Log Page: Supported 00:19:01.608 Get Log Page Extended Data: Supported 00:19:01.608 Telemetry Log Pages: Not Supported 00:19:01.608 Persistent Event Log Pages: Not Supported 00:19:01.608 Supported Log Pages Log Page: May Support 00:19:01.608 Commands Supported & Effects Log Page: Not Supported 00:19:01.608 Feature Identifiers & Effects Log Page:May Support 00:19:01.608 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.608 Data Area 4 for Telemetry Log: Not Supported 00:19:01.608 Error Log Page Entries Supported: 1 00:19:01.608 Keep Alive: Not Supported 00:19:01.608 00:19:01.608 NVM Command Set Attributes 00:19:01.608 ========================== 00:19:01.608 Submission Queue Entry Size 00:19:01.608 Max: 64 00:19:01.608 Min: 64 00:19:01.608 Completion Queue Entry Size 00:19:01.608 Max: 16 00:19:01.608 Min: 16 00:19:01.608 Number of Namespaces: 256 00:19:01.608 Compare Command: Supported 00:19:01.608 Write Uncorrectable Command: Not Supported 00:19:01.608 Dataset Management Command: Supported 00:19:01.608 Write Zeroes Command: Supported 00:19:01.608 Set Features Save Field: Supported 00:19:01.608 Reservations: Not Supported 00:19:01.608 Timestamp: Supported 00:19:01.608 Copy: Supported 00:19:01.608 Volatile Write Cache: Present 00:19:01.608 Atomic Write Unit (Normal): 1 00:19:01.608 Atomic Write Unit (PFail): 1 00:19:01.608 Atomic Compare & Write Unit: 1 00:19:01.608 Fused Compare & Write: Not Supported 00:19:01.608 Scatter-Gather List 00:19:01.608 SGL Command Set: Supported 00:19:01.608 SGL Keyed: Not Supported 00:19:01.608 SGL Bit Bucket Descriptor: Not Supported 00:19:01.608 SGL Metadata Pointer: Not Supported 00:19:01.608 Oversized SGL: Not Supported 00:19:01.608 SGL Metadata Address: Not Supported 00:19:01.608 SGL Offset: Not Supported 00:19:01.608 Transport SGL Data Block: Not Supported 00:19:01.608 Replay Protected Memory Block: Not Supported 00:19:01.608 00:19:01.608 Firmware Slot Information 00:19:01.608 ========================= 00:19:01.608 Active slot: 1 00:19:01.608 Slot 1 Firmware Revision: 1.0 00:19:01.608 00:19:01.608 00:19:01.608 Commands Supported and Effects 00:19:01.608 ============================== 00:19:01.608 Admin Commands 00:19:01.608 -------------- 00:19:01.608 Delete I/O Submission Queue (00h): Supported 00:19:01.608 Create I/O Submission Queue (01h): Supported 00:19:01.608 Get Log Page (02h): Supported 00:19:01.608 Delete I/O Completion Queue (04h): Supported 00:19:01.608 Create I/O Completion Queue (05h): Supported 00:19:01.608 Identify (06h): Supported 00:19:01.608 Abort (08h): Supported 00:19:01.608 Set Features (09h): Supported 00:19:01.608 Get Features (0Ah): Supported 00:19:01.608 Asynchronous Event Request (0Ch): Supported 00:19:01.608 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.608 Directive Send (19h): Supported 00:19:01.608 Directive Receive (1Ah): Supported 00:19:01.608 Virtualization Management (1Ch): Supported 00:19:01.608 Doorbell Buffer Config (7Ch): Supported 00:19:01.608 Format NVM (80h): Supported LBA-Change 00:19:01.608 I/O Commands 00:19:01.608 ------------ 00:19:01.608 Flush (00h): Supported LBA-Change 00:19:01.608 Write (01h): Supported LBA-Change 00:19:01.608 Read (02h): Supported 00:19:01.608 Compare (05h): Supported 00:19:01.608 Write Zeroes (08h): Supported LBA-Change 00:19:01.608 Dataset Management (09h): Supported LBA-Change 00:19:01.608 Unknown (0Ch): Supported 00:19:01.608 Unknown (12h): Supported 00:19:01.608 Copy (19h): Supported LBA-Change 00:19:01.608 Unknown (1Dh): 
Supported LBA-Change 00:19:01.608 00:19:01.608 Error Log 00:19:01.608 ========= 00:19:01.608 00:19:01.608 Arbitration 00:19:01.608 =========== 00:19:01.608 Arbitration Burst: no limit 00:19:01.608 00:19:01.608 Power Management 00:19:01.608 ================ 00:19:01.608 Number of Power States: 1 00:19:01.608 Current Power State: Power State #0 00:19:01.608 Power State #0: 00:19:01.608 Max Power: 25.00 W 00:19:01.608 Non-Operational State: Operational 00:19:01.608 Entry Latency: 16 microseconds 00:19:01.608 Exit Latency: 4 microseconds 00:19:01.608 Relative Read Throughput: 0 00:19:01.608 Relative Read Latency: 0 00:19:01.608 Relative Write Throughput: 0 00:19:01.608 Relative Write Latency: 0 00:19:01.608 Idle Power: Not Reported 00:19:01.608 Active Power: Not Reported 00:19:01.608 Non-Operational Permissive Mode: Not Supported 00:19:01.608 00:19:01.608 Health Information 00:19:01.608 ================== 00:19:01.608 Critical Warnings: 00:19:01.608 Available Spare Space: OK 00:19:01.608 Temperature: OK 00:19:01.608 Device Reliability: OK 00:19:01.608 Read Only: No 00:19:01.608 Volatile Memory Backup: OK 00:19:01.608 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.608 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.608 Available Spare: 0% 00:19:01.608 Available Spare Threshold: 0% 00:19:01.608 Life Percentage Used: 0% 00:19:01.608 Data Units Read: 914 00:19:01.608 Data Units Written: 787 00:19:01.608 Host Read Commands: 46439 00:19:01.608 Host Write Commands: 45338 00:19:01.608 Controller Busy Time: 0 minutes 00:19:01.608 Power Cycles: 0 00:19:01.608 Power On Hours: 0 hours 00:19:01.608 Unsafe Shutdowns: 0 00:19:01.608 Unrecoverable Media Errors: 0 00:19:01.608 Lifetime Error Log Entries: 0 00:19:01.608 Warning Temperature Time: 0 minutes 00:19:01.608 Critical Temperature Time: 0 minutes 00:19:01.608 00:19:01.608 Number of Queues 00:19:01.608 ================ 00:19:01.608 Number of I/O Submission Queues: 64 00:19:01.608 Number of I/O Completion Queues: 64 00:19:01.608 00:19:01.608 ZNS Specific Controller Data 00:19:01.608 ============================ 00:19:01.608 Zone Append Size Limit: 0 00:19:01.608 00:19:01.608 00:19:01.608 Active Namespaces 00:19:01.608 ================= 00:19:01.608 Namespace ID:1 00:19:01.608 Error Recovery Timeout: Unlimited 00:19:01.608 Command Set Identifier: NVM (00h) 00:19:01.608 Deallocate: Supported 00:19:01.609 Deallocated/Unwritten Error: Supported 00:19:01.609 Deallocated Read Value: All 0x00 00:19:01.609 Deallocate in Write Zeroes: Not Supported 00:19:01.609 Deallocated Guard Field: 0xFFFF 00:19:01.609 Flush: Supported 00:19:01.609 Reservation: Not Supported 00:19:01.609 Namespace Sharing Capabilities: Private 00:19:01.609 Size (in LBAs): 1310720 (5GiB) 00:19:01.609 Capacity (in LBAs): 1310720 (5GiB) 00:19:01.609 Utilization (in LBAs): 1310720 (5GiB) 00:19:01.609 Thin Provisioning: Not Supported 00:19:01.609 Per-NS Atomic Units: No 00:19:01.609 Maximum Single Source Range Length: 128 00:19:01.609 Maximum Copy Length: 128 00:19:01.609 Maximum Source Range Count: 128 00:19:01.609 NGUID/EUI64 Never Reused: No 00:19:01.609 Namespace Write Protected: No 00:19:01.609 Number of LBA Formats: 8 00:19:01.609 Current LBA Format: LBA Format #04 00:19:01.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.609 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 
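Two of the figures just printed for the 12341 namespace are easy to verify by hand: Size (in LBAs): 1310720 on the current LBA format #04 (4096-byte data, no metadata) works out to exactly the advertised 5 GiB, and the SMART temperatures are reported in Kelvin, which the tool converts by subtracting 273. A sanity check with the values copied from the log:

    size_lbas     = 1_310_720   # "Size (in LBAs)" for namespace 1
    lba_data_size = 4_096       # "Current LBA Format: LBA Format #04"

    cap_bytes = size_lbas * lba_data_size
    print(f"{cap_bytes} bytes = {cap_bytes / 2**30:.0f} GiB")  # 5 GiB

    current_k, threshold_k = 323, 343   # from "Health Information" above
    print(current_k - 273, threshold_k - 273)                  # 50 C, 70 C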
00:19:01.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.609 00:19:01.609 NVM Specific Namespace Data 00:19:01.609 =========================== 00:19:01.609 Logical Block Storage Tag Mask: 0 00:19:01.609 Protection Information Capabilities: 00:19:01.609 16b Guard Protection Information Storage Tag Support: No 00:19:01.609 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.609 Storage Tag Check Read Support: No 00:19:01.609 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.609 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:01.609 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:19:01.869 ===================================================== 00:19:01.869 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:01.869 ===================================================== 00:19:01.869 Controller Capabilities/Features 00:19:01.869 ================================ 00:19:01.869 Vendor ID: 1b36 00:19:01.869 Subsystem Vendor ID: 1af4 00:19:01.869 Serial Number: 12342 00:19:01.869 Model Number: QEMU NVMe Ctrl 00:19:01.869 Firmware Version: 8.0.0 00:19:01.869 Recommended Arb Burst: 6 00:19:01.869 IEEE OUI Identifier: 00 54 52 00:19:01.869 Multi-path I/O 00:19:01.869 May have multiple subsystem ports: No 00:19:01.869 May have multiple controllers: No 00:19:01.869 Associated with SR-IOV VF: No 00:19:01.869 Max Data Transfer Size: 524288 00:19:01.869 Max Number of Namespaces: 256 00:19:01.869 Max Number of I/O Queues: 64 00:19:01.869 NVMe Specification Version (VS): 1.4 00:19:01.869 NVMe Specification Version (Identify): 1.4 00:19:01.869 Maximum Queue Entries: 2048 00:19:01.869 Contiguous Queues Required: Yes 00:19:01.869 Arbitration Mechanisms Supported 00:19:01.869 Weighted Round Robin: Not Supported 00:19:01.869 Vendor Specific: Not Supported 00:19:01.869 Reset Timeout: 7500 ms 00:19:01.869 Doorbell Stride: 4 bytes 00:19:01.869 NVM Subsystem Reset: Not Supported 00:19:01.869 Command Sets Supported 00:19:01.869 NVM Command Set: Supported 00:19:01.869 Boot Partition: Not Supported 00:19:01.869 Memory Page Size Minimum: 4096 bytes 00:19:01.869 Memory Page Size Maximum: 65536 bytes 00:19:01.869 Persistent Memory Region: Not Supported 00:19:01.869 Optional Asynchronous Events Supported 00:19:01.869 Namespace Attribute Notices: Supported 00:19:01.869 Firmware Activation Notices: Not Supported 00:19:01.869 ANA Change Notices: Not Supported 00:19:01.869 PLE Aggregate Log Change Notices: Not Supported 00:19:01.869 LBA Status Info Alert Notices: 
Not Supported 00:19:01.869 EGE Aggregate Log Change Notices: Not Supported 00:19:01.869 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.869 Zone Descriptor Change Notices: Not Supported 00:19:01.869 Discovery Log Change Notices: Not Supported 00:19:01.869 Controller Attributes 00:19:01.869 128-bit Host Identifier: Not Supported 00:19:01.869 Non-Operational Permissive Mode: Not Supported 00:19:01.869 NVM Sets: Not Supported 00:19:01.869 Read Recovery Levels: Not Supported 00:19:01.869 Endurance Groups: Not Supported 00:19:01.869 Predictable Latency Mode: Not Supported 00:19:01.869 Traffic Based Keep ALive: Not Supported 00:19:01.869 Namespace Granularity: Not Supported 00:19:01.869 SQ Associations: Not Supported 00:19:01.869 UUID List: Not Supported 00:19:01.869 Multi-Domain Subsystem: Not Supported 00:19:01.869 Fixed Capacity Management: Not Supported 00:19:01.869 Variable Capacity Management: Not Supported 00:19:01.869 Delete Endurance Group: Not Supported 00:19:01.869 Delete NVM Set: Not Supported 00:19:01.869 Extended LBA Formats Supported: Supported 00:19:01.869 Flexible Data Placement Supported: Not Supported 00:19:01.869 00:19:01.869 Controller Memory Buffer Support 00:19:01.869 ================================ 00:19:01.869 Supported: No 00:19:01.869 00:19:01.869 Persistent Memory Region Support 00:19:01.869 ================================ 00:19:01.869 Supported: No 00:19:01.869 00:19:01.869 Admin Command Set Attributes 00:19:01.869 ============================ 00:19:01.869 Security Send/Receive: Not Supported 00:19:01.869 Format NVM: Supported 00:19:01.869 Firmware Activate/Download: Not Supported 00:19:01.869 Namespace Management: Supported 00:19:01.869 Device Self-Test: Not Supported 00:19:01.869 Directives: Supported 00:19:01.869 NVMe-MI: Not Supported 00:19:01.869 Virtualization Management: Not Supported 00:19:01.869 Doorbell Buffer Config: Supported 00:19:01.869 Get LBA Status Capability: Not Supported 00:19:01.870 Command & Feature Lockdown Capability: Not Supported 00:19:01.870 Abort Command Limit: 4 00:19:01.870 Async Event Request Limit: 4 00:19:01.870 Number of Firmware Slots: N/A 00:19:01.870 Firmware Slot 1 Read-Only: N/A 00:19:01.870 Firmware Activation Without Reset: N/A 00:19:01.870 Multiple Update Detection Support: N/A 00:19:01.870 Firmware Update Granularity: No Information Provided 00:19:01.870 Per-Namespace SMART Log: Yes 00:19:01.870 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.870 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:19:01.870 Command Effects Log Page: Supported 00:19:01.870 Get Log Page Extended Data: Supported 00:19:01.870 Telemetry Log Pages: Not Supported 00:19:01.870 Persistent Event Log Pages: Not Supported 00:19:01.870 Supported Log Pages Log Page: May Support 00:19:01.870 Commands Supported & Effects Log Page: Not Supported 00:19:01.870 Feature Identifiers & Effects Log Page:May Support 00:19:01.870 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.870 Data Area 4 for Telemetry Log: Not Supported 00:19:01.870 Error Log Page Entries Supported: 1 00:19:01.870 Keep Alive: Not Supported 00:19:01.870 00:19:01.870 NVM Command Set Attributes 00:19:01.870 ========================== 00:19:01.870 Submission Queue Entry Size 00:19:01.870 Max: 64 00:19:01.870 Min: 64 00:19:01.870 Completion Queue Entry Size 00:19:01.870 Max: 16 00:19:01.870 Min: 16 00:19:01.870 Number of Namespaces: 256 00:19:01.870 Compare Command: Supported 00:19:01.870 Write Uncorrectable Command: Not Supported 00:19:01.870 Dataset Management Command: 
Supported 00:19:01.870 Write Zeroes Command: Supported 00:19:01.870 Set Features Save Field: Supported 00:19:01.870 Reservations: Not Supported 00:19:01.870 Timestamp: Supported 00:19:01.870 Copy: Supported 00:19:01.870 Volatile Write Cache: Present 00:19:01.870 Atomic Write Unit (Normal): 1 00:19:01.870 Atomic Write Unit (PFail): 1 00:19:01.870 Atomic Compare & Write Unit: 1 00:19:01.870 Fused Compare & Write: Not Supported 00:19:01.870 Scatter-Gather List 00:19:01.870 SGL Command Set: Supported 00:19:01.870 SGL Keyed: Not Supported 00:19:01.870 SGL Bit Bucket Descriptor: Not Supported 00:19:01.870 SGL Metadata Pointer: Not Supported 00:19:01.870 Oversized SGL: Not Supported 00:19:01.870 SGL Metadata Address: Not Supported 00:19:01.870 SGL Offset: Not Supported 00:19:01.870 Transport SGL Data Block: Not Supported 00:19:01.870 Replay Protected Memory Block: Not Supported 00:19:01.870 00:19:01.870 Firmware Slot Information 00:19:01.870 ========================= 00:19:01.870 Active slot: 1 00:19:01.870 Slot 1 Firmware Revision: 1.0 00:19:01.870 00:19:01.870 00:19:01.870 Commands Supported and Effects 00:19:01.870 ============================== 00:19:01.870 Admin Commands 00:19:01.870 -------------- 00:19:01.870 Delete I/O Submission Queue (00h): Supported 00:19:01.870 Create I/O Submission Queue (01h): Supported 00:19:01.870 Get Log Page (02h): Supported 00:19:01.870 Delete I/O Completion Queue (04h): Supported 00:19:01.870 Create I/O Completion Queue (05h): Supported 00:19:01.870 Identify (06h): Supported 00:19:01.870 Abort (08h): Supported 00:19:01.870 Set Features (09h): Supported 00:19:01.870 Get Features (0Ah): Supported 00:19:01.870 Asynchronous Event Request (0Ch): Supported 00:19:01.870 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:01.870 Directive Send (19h): Supported 00:19:01.870 Directive Receive (1Ah): Supported 00:19:01.870 Virtualization Management (1Ch): Supported 00:19:01.870 Doorbell Buffer Config (7Ch): Supported 00:19:01.870 Format NVM (80h): Supported LBA-Change 00:19:01.870 I/O Commands 00:19:01.870 ------------ 00:19:01.870 Flush (00h): Supported LBA-Change 00:19:01.870 Write (01h): Supported LBA-Change 00:19:01.870 Read (02h): Supported 00:19:01.870 Compare (05h): Supported 00:19:01.870 Write Zeroes (08h): Supported LBA-Change 00:19:01.870 Dataset Management (09h): Supported LBA-Change 00:19:01.870 Unknown (0Ch): Supported 00:19:01.870 Unknown (12h): Supported 00:19:01.870 Copy (19h): Supported LBA-Change 00:19:01.870 Unknown (1Dh): Supported LBA-Change 00:19:01.870 00:19:01.870 Error Log 00:19:01.870 ========= 00:19:01.870 00:19:01.870 Arbitration 00:19:01.870 =========== 00:19:01.870 Arbitration Burst: no limit 00:19:01.870 00:19:01.870 Power Management 00:19:01.870 ================ 00:19:01.870 Number of Power States: 1 00:19:01.870 Current Power State: Power State #0 00:19:01.870 Power State #0: 00:19:01.870 Max Power: 25.00 W 00:19:01.870 Non-Operational State: Operational 00:19:01.870 Entry Latency: 16 microseconds 00:19:01.870 Exit Latency: 4 microseconds 00:19:01.870 Relative Read Throughput: 0 00:19:01.870 Relative Read Latency: 0 00:19:01.870 Relative Write Throughput: 0 00:19:01.870 Relative Write Latency: 0 00:19:01.870 Idle Power: Not Reported 00:19:01.870 Active Power: Not Reported 00:19:01.870 Non-Operational Permissive Mode: Not Supported 00:19:01.870 00:19:01.870 Health Information 00:19:01.870 ================== 00:19:01.870 Critical Warnings: 00:19:01.870 Available Spare Space: OK 00:19:01.870 Temperature: OK 00:19:01.870 Device 
Reliability: OK 00:19:01.870 Read Only: No 00:19:01.870 Volatile Memory Backup: OK 00:19:01.870 Current Temperature: 323 Kelvin (50 Celsius) 00:19:01.870 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:01.870 Available Spare: 0% 00:19:01.870 Available Spare Threshold: 0% 00:19:01.870 Life Percentage Used: 0% 00:19:01.870 Data Units Read: 1962 00:19:01.870 Data Units Written: 1749 00:19:01.870 Host Read Commands: 96629 00:19:01.870 Host Write Commands: 94898 00:19:01.870 Controller Busy Time: 0 minutes 00:19:01.870 Power Cycles: 0 00:19:01.870 Power On Hours: 0 hours 00:19:01.870 Unsafe Shutdowns: 0 00:19:01.870 Unrecoverable Media Errors: 0 00:19:01.870 Lifetime Error Log Entries: 0 00:19:01.870 Warning Temperature Time: 0 minutes 00:19:01.870 Critical Temperature Time: 0 minutes 00:19:01.870 00:19:01.870 Number of Queues 00:19:01.870 ================ 00:19:01.870 Number of I/O Submission Queues: 64 00:19:01.870 Number of I/O Completion Queues: 64 00:19:01.870 00:19:01.870 ZNS Specific Controller Data 00:19:01.870 ============================ 00:19:01.870 Zone Append Size Limit: 0 00:19:01.870 00:19:01.870 00:19:01.870 Active Namespaces 00:19:01.870 ================= 00:19:01.870 Namespace ID:1 00:19:01.870 Error Recovery Timeout: Unlimited 00:19:01.870 Command Set Identifier: NVM (00h) 00:19:01.870 Deallocate: Supported 00:19:01.870 Deallocated/Unwritten Error: Supported 00:19:01.870 Deallocated Read Value: All 0x00 00:19:01.870 Deallocate in Write Zeroes: Not Supported 00:19:01.870 Deallocated Guard Field: 0xFFFF 00:19:01.870 Flush: Supported 00:19:01.870 Reservation: Not Supported 00:19:01.870 Namespace Sharing Capabilities: Private 00:19:01.870 Size (in LBAs): 1048576 (4GiB) 00:19:01.870 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.870 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.870 Thin Provisioning: Not Supported 00:19:01.870 Per-NS Atomic Units: No 00:19:01.870 Maximum Single Source Range Length: 128 00:19:01.870 Maximum Copy Length: 128 00:19:01.870 Maximum Source Range Count: 128 00:19:01.870 NGUID/EUI64 Never Reused: No 00:19:01.870 Namespace Write Protected: No 00:19:01.870 Number of LBA Formats: 8 00:19:01.870 Current LBA Format: LBA Format #04 00:19:01.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.870 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.870 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.870 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.870 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.871 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.871 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.871 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.871 00:19:01.871 NVM Specific Namespace Data 00:19:01.871 =========================== 00:19:01.871 Logical Block Storage Tag Mask: 0 00:19:01.871 Protection Information Capabilities: 00:19:01.871 16b Guard Protection Information Storage Tag Support: No 00:19:01.871 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.871 Storage Tag Check Read Support: No 00:19:01.871 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Namespace ID:2 00:19:01.871 Error Recovery Timeout: Unlimited 00:19:01.871 Command Set Identifier: NVM (00h) 00:19:01.871 Deallocate: Supported 00:19:01.871 Deallocated/Unwritten Error: Supported 00:19:01.871 Deallocated Read Value: All 0x00 00:19:01.871 Deallocate in Write Zeroes: Not Supported 00:19:01.871 Deallocated Guard Field: 0xFFFF 00:19:01.871 Flush: Supported 00:19:01.871 Reservation: Not Supported 00:19:01.871 Namespace Sharing Capabilities: Private 00:19:01.871 Size (in LBAs): 1048576 (4GiB) 00:19:01.871 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.871 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.871 Thin Provisioning: Not Supported 00:19:01.871 Per-NS Atomic Units: No 00:19:01.871 Maximum Single Source Range Length: 128 00:19:01.871 Maximum Copy Length: 128 00:19:01.871 Maximum Source Range Count: 128 00:19:01.871 NGUID/EUI64 Never Reused: No 00:19:01.871 Namespace Write Protected: No 00:19:01.871 Number of LBA Formats: 8 00:19:01.871 Current LBA Format: LBA Format #04 00:19:01.871 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.871 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.871 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.871 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.871 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.871 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.871 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.871 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.871 00:19:01.871 NVM Specific Namespace Data 00:19:01.871 =========================== 00:19:01.871 Logical Block Storage Tag Mask: 0 00:19:01.871 Protection Information Capabilities: 00:19:01.871 16b Guard Protection Information Storage Tag Support: No 00:19:01.871 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.871 Storage Tag Check Read Support: No 00:19:01.871 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Namespace ID:3 00:19:01.871 Error Recovery Timeout: Unlimited 00:19:01.871 Command Set Identifier: NVM (00h) 00:19:01.871 Deallocate: Supported 00:19:01.871 Deallocated/Unwritten Error: Supported 00:19:01.871 Deallocated Read Value: All 0x00 00:19:01.871 Deallocate in Write Zeroes: Not Supported 00:19:01.871 Deallocated Guard Field: 0xFFFF 00:19:01.871 Flush: Supported 00:19:01.871 Reservation: Not Supported 00:19:01.871 
Namespace Sharing Capabilities: Private 00:19:01.871 Size (in LBAs): 1048576 (4GiB) 00:19:01.871 Capacity (in LBAs): 1048576 (4GiB) 00:19:01.871 Utilization (in LBAs): 1048576 (4GiB) 00:19:01.871 Thin Provisioning: Not Supported 00:19:01.871 Per-NS Atomic Units: No 00:19:01.871 Maximum Single Source Range Length: 128 00:19:01.871 Maximum Copy Length: 128 00:19:01.871 Maximum Source Range Count: 128 00:19:01.871 NGUID/EUI64 Never Reused: No 00:19:01.871 Namespace Write Protected: No 00:19:01.871 Number of LBA Formats: 8 00:19:01.871 Current LBA Format: LBA Format #04 00:19:01.871 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.871 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:01.871 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:01.871 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:01.871 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:01.871 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:01.871 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:01.871 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:01.871 00:19:01.871 NVM Specific Namespace Data 00:19:01.871 =========================== 00:19:01.871 Logical Block Storage Tag Mask: 0 00:19:01.871 Protection Information Capabilities: 00:19:01.871 16b Guard Protection Information Storage Tag Support: No 00:19:01.871 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:01.871 Storage Tag Check Read Support: No 00:19:01.871 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:01.871 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:01.871 04:41:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:19:02.131 ===================================================== 00:19:02.131 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:02.131 ===================================================== 00:19:02.131 Controller Capabilities/Features 00:19:02.131 ================================ 00:19:02.131 Vendor ID: 1b36 00:19:02.131 Subsystem Vendor ID: 1af4 00:19:02.131 Serial Number: 12343 00:19:02.131 Model Number: QEMU NVMe Ctrl 00:19:02.131 Firmware Version: 8.0.0 00:19:02.131 Recommended Arb Burst: 6 00:19:02.131 IEEE OUI Identifier: 00 54 52 00:19:02.131 Multi-path I/O 00:19:02.131 May have multiple subsystem ports: No 00:19:02.131 May have multiple controllers: Yes 00:19:02.131 Associated with SR-IOV VF: No 00:19:02.131 Max Data Transfer Size: 524288 00:19:02.131 Max Number of Namespaces: 256 00:19:02.131 Max Number of I/O Queues: 64 00:19:02.131 NVMe Specification Version (VS): 1.4 00:19:02.131 NVMe Specification Version (Identify): 1.4 00:19:02.131 Maximum Queue Entries: 2048 
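Each per-device pass in this section invokes spdk_nvme_identify with a transport ID handed to -r, e.g. 'trtype:PCIe traddr:0000:00:13.0' for the 12343 controller being dumped here. The string is space-separated key:value pairs, where only the first colon in each token separates key from value (PCI addresses contain colons themselves). A minimal parser sketch; the helper is our own, not an SPDK API:

    def parse_trid(trid: str) -> dict[str, str]:
        # 'trtype:PCIe traddr:0000:00:13.0'
        #   -> {'trtype': 'PCIe', 'traddr': '0000:00:13.0'}
        return dict(tok.split(":", 1) for tok in trid.split())

    trid = parse_trid("trtype:PCIe traddr:0000:00:13.0")
    assert trid == {"trtype": "PCIe", "traddr": "0000:00:13.0"}
    print(trid)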
00:19:02.131 Contiguous Queues Required: Yes 00:19:02.131 Arbitration Mechanisms Supported 00:19:02.131 Weighted Round Robin: Not Supported 00:19:02.131 Vendor Specific: Not Supported 00:19:02.131 Reset Timeout: 7500 ms 00:19:02.131 Doorbell Stride: 4 bytes 00:19:02.131 NVM Subsystem Reset: Not Supported 00:19:02.131 Command Sets Supported 00:19:02.131 NVM Command Set: Supported 00:19:02.131 Boot Partition: Not Supported 00:19:02.131 Memory Page Size Minimum: 4096 bytes 00:19:02.131 Memory Page Size Maximum: 65536 bytes 00:19:02.131 Persistent Memory Region: Not Supported 00:19:02.131 Optional Asynchronous Events Supported 00:19:02.131 Namespace Attribute Notices: Supported 00:19:02.131 Firmware Activation Notices: Not Supported 00:19:02.131 ANA Change Notices: Not Supported 00:19:02.131 PLE Aggregate Log Change Notices: Not Supported 00:19:02.131 LBA Status Info Alert Notices: Not Supported 00:19:02.131 EGE Aggregate Log Change Notices: Not Supported 00:19:02.131 Normal NVM Subsystem Shutdown event: Not Supported 00:19:02.131 Zone Descriptor Change Notices: Not Supported 00:19:02.131 Discovery Log Change Notices: Not Supported 00:19:02.131 Controller Attributes 00:19:02.131 128-bit Host Identifier: Not Supported 00:19:02.131 Non-Operational Permissive Mode: Not Supported 00:19:02.131 NVM Sets: Not Supported 00:19:02.131 Read Recovery Levels: Not Supported 00:19:02.131 Endurance Groups: Supported 00:19:02.131 Predictable Latency Mode: Not Supported 00:19:02.131 Traffic Based Keep Alive: Not Supported 00:19:02.131 Namespace Granularity: Not Supported 00:19:02.131 SQ Associations: Not Supported 00:19:02.131 UUID List: Not Supported 00:19:02.131 Multi-Domain Subsystem: Not Supported 00:19:02.131 Fixed Capacity Management: Not Supported 00:19:02.131 Variable Capacity Management: Not Supported 00:19:02.131 Delete Endurance Group: Not Supported 00:19:02.131 Delete NVM Set: Not Supported 00:19:02.131 Extended LBA Formats Supported: Supported 00:19:02.131 Flexible Data Placement Supported: Supported 00:19:02.131 00:19:02.131 Controller Memory Buffer Support 00:19:02.131 ================================ 00:19:02.131 Supported: No 00:19:02.131 00:19:02.131 Persistent Memory Region Support 00:19:02.131 ================================ 00:19:02.131 Supported: No 00:19:02.131 00:19:02.131 Admin Command Set Attributes 00:19:02.131 ============================ 00:19:02.131 Security Send/Receive: Not Supported 00:19:02.131 Format NVM: Supported 00:19:02.131 Firmware Activate/Download: Not Supported 00:19:02.131 Namespace Management: Supported 00:19:02.131 Device Self-Test: Not Supported 00:19:02.131 Directives: Supported 00:19:02.131 NVMe-MI: Not Supported 00:19:02.131 Virtualization Management: Not Supported 00:19:02.131 Doorbell Buffer Config: Supported 00:19:02.131 Get LBA Status Capability: Not Supported 00:19:02.131 Command & Feature Lockdown Capability: Not Supported 00:19:02.131 Abort Command Limit: 4 00:19:02.131 Async Event Request Limit: 4 00:19:02.131 Number of Firmware Slots: N/A 00:19:02.131 Firmware Slot 1 Read-Only: N/A 00:19:02.131 Firmware Activation Without Reset: N/A 00:19:02.131 Multiple Update Detection Support: N/A 00:19:02.131 Firmware Update Granularity: No Information Provided 00:19:02.131 Per-Namespace SMART Log: Yes 00:19:02.131 Asymmetric Namespace Access Log Page: Not Supported 00:19:02.131 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:02.131 Command Effects Log Page: Supported 00:19:02.131 Get Log Page Extended Data: Supported 00:19:02.131 Telemetry Log Pages: Not 
Supported 00:19:02.131 Persistent Event Log Pages: Not Supported 00:19:02.131 Supported Log Pages Log Page: May Support 00:19:02.131 Commands Supported & Effects Log Page: Not Supported 00:19:02.131 Feature Identifiers & Effects Log Page: May Support 00:19:02.131 NVMe-MI Commands & Effects Log Page: May Support 00:19:02.131 Data Area 4 for Telemetry Log: Not Supported 00:19:02.131 Error Log Page Entries Supported: 1 00:19:02.131 Keep Alive: Not Supported 00:19:02.131 00:19:02.131 NVM Command Set Attributes 00:19:02.131 ========================== 00:19:02.131 Submission Queue Entry Size 00:19:02.132 Max: 64 00:19:02.132 Min: 64 00:19:02.132 Completion Queue Entry Size 00:19:02.132 Max: 16 00:19:02.132 Min: 16 00:19:02.132 Number of Namespaces: 256 00:19:02.132 Compare Command: Supported 00:19:02.132 Write Uncorrectable Command: Not Supported 00:19:02.132 Dataset Management Command: Supported 00:19:02.132 Write Zeroes Command: Supported 00:19:02.132 Set Features Save Field: Supported 00:19:02.132 Reservations: Not Supported 00:19:02.132 Timestamp: Supported 00:19:02.132 Copy: Supported 00:19:02.132 Volatile Write Cache: Present 00:19:02.132 Atomic Write Unit (Normal): 1 00:19:02.132 Atomic Write Unit (PFail): 1 00:19:02.132 Atomic Compare & Write Unit: 1 00:19:02.132 Fused Compare & Write: Not Supported 00:19:02.132 Scatter-Gather List 00:19:02.132 SGL Command Set: Supported 00:19:02.132 SGL Keyed: Not Supported 00:19:02.132 SGL Bit Bucket Descriptor: Not Supported 00:19:02.132 SGL Metadata Pointer: Not Supported 00:19:02.132 Oversized SGL: Not Supported 00:19:02.132 SGL Metadata Address: Not Supported 00:19:02.132 SGL Offset: Not Supported 00:19:02.132 Transport SGL Data Block: Not Supported 00:19:02.132 Replay Protected Memory Block: Not Supported 00:19:02.132 00:19:02.132 Firmware Slot Information 00:19:02.132 ========================= 00:19:02.132 Active slot: 1 00:19:02.132 Slot 1 Firmware Revision: 1.0 00:19:02.132 00:19:02.132 00:19:02.132 Commands Supported and Effects 00:19:02.132 ============================== 00:19:02.132 Admin Commands 00:19:02.132 -------------- 00:19:02.132 Delete I/O Submission Queue (00h): Supported 00:19:02.132 Create I/O Submission Queue (01h): Supported 00:19:02.132 Get Log Page (02h): Supported 00:19:02.132 Delete I/O Completion Queue (04h): Supported 00:19:02.132 Create I/O Completion Queue (05h): Supported 00:19:02.132 Identify (06h): Supported 00:19:02.132 Abort (08h): Supported 00:19:02.132 Set Features (09h): Supported 00:19:02.132 Get Features (0Ah): Supported 00:19:02.132 Asynchronous Event Request (0Ch): Supported 00:19:02.132 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:02.132 Directive Send (19h): Supported 00:19:02.132 Directive Receive (1Ah): Supported 00:19:02.132 Virtualization Management (1Ch): Supported 00:19:02.132 Doorbell Buffer Config (7Ch): Supported 00:19:02.132 Format NVM (80h): Supported LBA-Change 00:19:02.132 I/O Commands 00:19:02.132 ------------ 00:19:02.132 Flush (00h): Supported LBA-Change 00:19:02.132 Write (01h): Supported LBA-Change 00:19:02.132 Read (02h): Supported 00:19:02.132 Compare (05h): Supported 00:19:02.132 Write Zeroes (08h): Supported LBA-Change 00:19:02.132 Dataset Management (09h): Supported LBA-Change 00:19:02.132 Unknown (0Ch): Supported 00:19:02.132 Unknown (12h): Supported 00:19:02.132 Copy (19h): Supported LBA-Change 00:19:02.132 Unknown (1Dh): Supported LBA-Change 00:19:02.132 00:19:02.132 Error Log 00:19:02.132 ========= 00:19:02.132 00:19:02.132 Arbitration 00:19:02.132 =========== 
00:19:02.132 Arbitration Burst: no limit 00:19:02.132 00:19:02.132 Power Management 00:19:02.132 ================ 00:19:02.132 Number of Power States: 1 00:19:02.132 Current Power State: Power State #0 00:19:02.132 Power State #0: 00:19:02.132 Max Power: 25.00 W 00:19:02.132 Non-Operational State: Operational 00:19:02.132 Entry Latency: 16 microseconds 00:19:02.132 Exit Latency: 4 microseconds 00:19:02.132 Relative Read Throughput: 0 00:19:02.132 Relative Read Latency: 0 00:19:02.132 Relative Write Throughput: 0 00:19:02.132 Relative Write Latency: 0 00:19:02.132 Idle Power: Not Reported 00:19:02.132 Active Power: Not Reported 00:19:02.132 Non-Operational Permissive Mode: Not Supported 00:19:02.132 00:19:02.132 Health Information 00:19:02.132 ================== 00:19:02.132 Critical Warnings: 00:19:02.132 Available Spare Space: OK 00:19:02.132 Temperature: OK 00:19:02.132 Device Reliability: OK 00:19:02.132 Read Only: No 00:19:02.132 Volatile Memory Backup: OK 00:19:02.132 Current Temperature: 323 Kelvin (50 Celsius) 00:19:02.132 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:02.132 Available Spare: 0% 00:19:02.132 Available Spare Threshold: 0% 00:19:02.132 Life Percentage Used: 0% 00:19:02.132 Data Units Read: 752 00:19:02.132 Data Units Written: 681 00:19:02.132 Host Read Commands: 33134 00:19:02.132 Host Write Commands: 32557 00:19:02.132 Controller Busy Time: 0 minutes 00:19:02.132 Power Cycles: 0 00:19:02.132 Power On Hours: 0 hours 00:19:02.132 Unsafe Shutdowns: 0 00:19:02.132 Unrecoverable Media Errors: 0 00:19:02.132 Lifetime Error Log Entries: 0 00:19:02.132 Warning Temperature Time: 0 minutes 00:19:02.132 Critical Temperature Time: 0 minutes 00:19:02.132 00:19:02.132 Number of Queues 00:19:02.132 ================ 00:19:02.132 Number of I/O Submission Queues: 64 00:19:02.132 Number of I/O Completion Queues: 64 00:19:02.132 00:19:02.132 ZNS Specific Controller Data 00:19:02.132 ============================ 00:19:02.132 Zone Append Size Limit: 0 00:19:02.132 00:19:02.132 00:19:02.132 Active Namespaces 00:19:02.132 ================= 00:19:02.132 Namespace ID:1 00:19:02.132 Error Recovery Timeout: Unlimited 00:19:02.132 Command Set Identifier: NVM (00h) 00:19:02.132 Deallocate: Supported 00:19:02.132 Deallocated/Unwritten Error: Supported 00:19:02.132 Deallocated Read Value: All 0x00 00:19:02.132 Deallocate in Write Zeroes: Not Supported 00:19:02.132 Deallocated Guard Field: 0xFFFF 00:19:02.132 Flush: Supported 00:19:02.132 Reservation: Not Supported 00:19:02.132 Namespace Sharing Capabilities: Multiple Controllers 00:19:02.132 Size (in LBAs): 262144 (1GiB) 00:19:02.132 Capacity (in LBAs): 262144 (1GiB) 00:19:02.132 Utilization (in LBAs): 262144 (1GiB) 00:19:02.132 Thin Provisioning: Not Supported 00:19:02.132 Per-NS Atomic Units: No 00:19:02.132 Maximum Single Source Range Length: 128 00:19:02.132 Maximum Copy Length: 128 00:19:02.132 Maximum Source Range Count: 128 00:19:02.132 NGUID/EUI64 Never Reused: No 00:19:02.132 Namespace Write Protected: No 00:19:02.132 Endurance group ID: 1 00:19:02.132 Number of LBA Formats: 8 00:19:02.132 Current LBA Format: LBA Format #04 00:19:02.132 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:02.132 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:02.132 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:02.132 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:02.132 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:02.132 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:02.132 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:19:02.132 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:02.132 00:19:02.132 Get Feature FDP: 00:19:02.132 ================ 00:19:02.132 Enabled: Yes 00:19:02.132 FDP configuration index: 0 00:19:02.132 00:19:02.132 FDP configurations log page 00:19:02.132 =========================== 00:19:02.132 Number of FDP configurations: 1 00:19:02.132 Version: 0 00:19:02.132 Size: 112 00:19:02.132 FDP Configuration Descriptor: 0 00:19:02.132 Descriptor Size: 96 00:19:02.132 Reclaim Group Identifier format: 2 00:19:02.132 FDP Volatile Write Cache: Not Present 00:19:02.132 FDP Configuration: Valid 00:19:02.132 Vendor Specific Size: 0 00:19:02.132 Number of Reclaim Groups: 2 00:19:02.132 Number of Reclaim Unit Handles: 8 00:19:02.132 Max Placement Identifiers: 128 00:19:02.132 Number of Namespaces Supported: 256 00:19:02.132 Reclaim unit Nominal Size: 6000000 bytes 00:19:02.132 Estimated Reclaim Unit Time Limit: Not Reported 00:19:02.132 RUH Desc #000: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #001: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #002: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #003: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #004: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #005: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #006: RUH Type: Initially Isolated 00:19:02.132 RUH Desc #007: RUH Type: Initially Isolated 00:19:02.132 00:19:02.132 FDP reclaim unit handle usage log page 00:19:02.132 ====================================== 00:19:02.132 Number of Reclaim Unit Handles: 8 00:19:02.132 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:02.132 RUH Usage Desc #001: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #002: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #003: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #004: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #005: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #006: RUH Attributes: Unused 00:19:02.132 RUH Usage Desc #007: RUH Attributes: Unused 00:19:02.132 00:19:02.133 FDP statistics log page 00:19:02.133 ======================= 00:19:02.133 Host bytes with metadata written: 429826048 00:19:02.133 Media bytes with metadata written: 429871104 00:19:02.133 Media bytes erased: 0 00:19:02.133 00:19:02.133 FDP events log page 00:19:02.133 =================== 00:19:02.133 Number of FDP events: 0 00:19:02.133 00:19:02.133 NVM Specific Namespace Data 00:19:02.133 =========================== 00:19:02.133 Logical Block Storage Tag Mask: 0 00:19:02.133 Protection Information Capabilities: 00:19:02.133 16b Guard Protection Information Storage Tag Support: No 00:19:02.133 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:02.133 Storage Tag Check Read Support: No 00:19:02.133 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:02.133 ************************************ 00:19:02.133 END TEST nvme_identify 00:19:02.133 ************************************ 00:19:02.133 00:19:02.133 real 0m1.205s 00:19:02.133 user 0m0.448s 00:19:02.133 sys 0m0.532s 00:19:02.133 04:41:09 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.133 04:41:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:02.133 04:41:09 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:02.133 04:41:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:02.133 04:41:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.133 04:41:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.133 ************************************ 00:19:02.133 START TEST nvme_perf 00:19:02.133 ************************************ 00:19:02.133 04:41:09 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:19:02.133 04:41:09 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:03.511 Initializing NVMe Controllers 00:19:03.511 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:03.511 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:03.511 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:03.511 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:03.511 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:03.511 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:03.511 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:03.511 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:03.511 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:03.511 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:03.511 Initialization complete. Launching workers. 
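[Editor's note] The trace above shows the two binaries this test drives: nvme.sh@15-16 loop spdk_nvme_identify over each controller BDF, and nvme.sh@22 launches spdk_nvme_perf with the flags echoed in the banner. Below is a minimal bash sketch of an equivalent manual run, not the autotest harness itself; the repo path and BDF list are assumptions copied from this log (four QEMU-emulated 1b36:0010 controllers).

#!/usr/bin/env bash
# Sketch only: path and BDF list are copied from this log, not discovered on the host.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0)

for bdf in "${bdfs[@]}"; do
  # Dump controller and namespace identify data for one PCIe controller.
  # -r selects transport and address; -i 0 is the shared memory group id seen in the trace.
  "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

# Queue depth 128 (-q), sequential reads (-w read), 12288-byte I/Os (-o, i.e. three
# 4096-byte blocks at the current LBA format), 1-second run (-t). -LL and -N are
# carried over verbatim from the trace above.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a sanity check on the summary table below: 18636.45 IOPS × 12288 bytes per I/O ÷ 2^20 ≈ 218.4 MiB/s, matching the per-device MiB/s column.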
00:19:03.511 ======================================================== 00:19:03.511 Latency(us) 00:19:03.511 Device Information : IOPS MiB/s Average min max 00:19:03.511 PCIE (0000:00:10.0) NSID 1 from core 0: 18636.45 218.40 6878.91 5514.91 33944.89 00:19:03.511 PCIE (0000:00:11.0) NSID 1 from core 0: 18636.45 218.40 6870.24 5620.52 32175.90 00:19:03.511 PCIE (0000:00:13.0) NSID 1 from core 0: 18636.45 218.40 6860.58 5560.48 30835.63 00:19:03.511 PCIE (0000:00:12.0) NSID 1 from core 0: 18636.45 218.40 6850.65 5572.35 29096.18 00:19:03.511 PCIE (0000:00:12.0) NSID 2 from core 0: 18636.45 218.40 6839.99 5566.28 27319.83 00:19:03.511 PCIE (0000:00:12.0) NSID 3 from core 0: 18700.28 219.14 6806.65 5560.97 22157.94 00:19:03.511 ======================================================== 00:19:03.511 Total : 111882.53 1311.12 6851.14 5514.91 33944.89 00:19:03.511 00:19:03.511 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:03.511 ================================================================================= 00:19:03.511 1.00000% : 5696.591us 00:19:03.511 10.00000% : 5898.240us 00:19:03.511 25.00000% : 6125.095us 00:19:03.511 50.00000% : 6427.569us 00:19:03.511 75.00000% : 6755.249us 00:19:03.511 90.00000% : 8418.855us 00:19:03.511 95.00000% : 9477.514us 00:19:03.511 98.00000% : 10989.883us 00:19:03.511 99.00000% : 12300.603us 00:19:03.511 99.50000% : 28835.840us 00:19:03.511 99.90000% : 33473.772us 00:19:03.511 99.99000% : 34078.720us 00:19:03.511 99.99900% : 34078.720us 00:19:03.511 99.99990% : 34078.720us 00:19:03.511 99.99999% : 34078.720us 00:19:03.511 00:19:03.511 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:03.511 ================================================================================= 00:19:03.511 1.00000% : 5772.209us 00:19:03.511 10.00000% : 5973.858us 00:19:03.511 25.00000% : 6150.302us 00:19:03.511 50.00000% : 6402.363us 00:19:03.511 75.00000% : 6704.837us 00:19:03.511 90.00000% : 8469.268us 00:19:03.511 95.00000% : 9477.514us 00:19:03.511 98.00000% : 11292.357us 00:19:03.511 99.00000% : 12300.603us 00:19:03.511 99.50000% : 27222.646us 00:19:03.511 99.90000% : 31860.578us 00:19:03.511 99.99000% : 32263.877us 00:19:03.511 99.99900% : 32263.877us 00:19:03.511 99.99990% : 32263.877us 00:19:03.511 99.99999% : 32263.877us 00:19:03.511 00:19:03.511 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:03.511 ================================================================================= 00:19:03.511 1.00000% : 5772.209us 00:19:03.511 10.00000% : 5948.652us 00:19:03.511 25.00000% : 6150.302us 00:19:03.511 50.00000% : 6402.363us 00:19:03.511 75.00000% : 6704.837us 00:19:03.511 90.00000% : 8469.268us 00:19:03.511 95.00000% : 9477.514us 00:19:03.511 98.00000% : 11191.532us 00:19:03.511 99.00000% : 12351.015us 00:19:03.511 99.50000% : 26012.751us 00:19:03.511 99.90000% : 30449.034us 00:19:03.511 99.99000% : 30852.332us 00:19:03.511 99.99900% : 30852.332us 00:19:03.511 99.99990% : 30852.332us 00:19:03.511 99.99999% : 30852.332us 00:19:03.511 00:19:03.511 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:03.511 ================================================================================= 00:19:03.511 1.00000% : 5772.209us 00:19:03.511 10.00000% : 5948.652us 00:19:03.511 25.00000% : 6150.302us 00:19:03.511 50.00000% : 6402.363us 00:19:03.511 75.00000% : 6704.837us 00:19:03.511 90.00000% : 8418.855us 00:19:03.511 95.00000% : 9578.338us 00:19:03.511 98.00000% : 10989.883us 00:19:03.512 99.00000% : 
12451.840us 00:19:03.512 99.50000% : 24097.083us 00:19:03.512 99.90000% : 28835.840us 00:19:03.512 99.99000% : 29239.138us 00:19:03.512 99.99900% : 29239.138us 00:19:03.512 99.99990% : 29239.138us 00:19:03.512 99.99999% : 29239.138us 00:19:03.512 00:19:03.512 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:03.512 ================================================================================= 00:19:03.512 1.00000% : 5772.209us 00:19:03.512 10.00000% : 5973.858us 00:19:03.512 25.00000% : 6150.302us 00:19:03.512 50.00000% : 6402.363us 00:19:03.512 75.00000% : 6704.837us 00:19:03.512 90.00000% : 8368.443us 00:19:03.512 95.00000% : 9527.926us 00:19:03.512 98.00000% : 10939.471us 00:19:03.512 99.00000% : 12905.551us 00:19:03.512 99.50000% : 22181.415us 00:19:03.512 99.90000% : 27020.997us 00:19:03.512 99.99000% : 27424.295us 00:19:03.512 99.99900% : 27424.295us 00:19:03.512 99.99990% : 27424.295us 00:19:03.512 99.99999% : 27424.295us 00:19:03.512 00:19:03.512 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:03.512 ================================================================================= 00:19:03.512 1.00000% : 5772.209us 00:19:03.512 10.00000% : 5948.652us 00:19:03.512 25.00000% : 6150.302us 00:19:03.512 50.00000% : 6402.363us 00:19:03.512 75.00000% : 6704.837us 00:19:03.512 90.00000% : 8418.855us 00:19:03.512 95.00000% : 9578.338us 00:19:03.512 98.00000% : 10889.058us 00:19:03.512 99.00000% : 13006.375us 00:19:03.512 99.50000% : 17039.360us 00:19:03.512 99.90000% : 21778.117us 00:19:03.512 99.99000% : 22181.415us 00:19:03.512 99.99900% : 22181.415us 00:19:03.512 99.99990% : 22181.415us 00:19:03.512 99.99999% : 22181.415us 00:19:03.512 00:19:03.512 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:03.512 ============================================================================== 00:19:03.512 Range in us Cumulative IO count 00:19:03.512 5494.942 - 5520.148: 0.0054% ( 1) 00:19:03.512 5520.148 - 5545.354: 0.0642% ( 11) 00:19:03.512 5545.354 - 5570.560: 0.1338% ( 13) 00:19:03.512 5570.560 - 5595.766: 0.2247% ( 17) 00:19:03.512 5595.766 - 5620.972: 0.3371% ( 21) 00:19:03.512 5620.972 - 5646.178: 0.4976% ( 30) 00:19:03.512 5646.178 - 5671.385: 0.7545% ( 48) 00:19:03.512 5671.385 - 5696.591: 1.1612% ( 76) 00:19:03.512 5696.591 - 5721.797: 1.8140% ( 122) 00:19:03.512 5721.797 - 5747.003: 2.6916% ( 164) 00:19:03.512 5747.003 - 5772.209: 3.7243% ( 193) 00:19:03.512 5772.209 - 5797.415: 4.7999% ( 201) 00:19:03.512 5797.415 - 5822.622: 6.0841% ( 240) 00:19:03.512 5822.622 - 5847.828: 7.4754% ( 260) 00:19:03.512 5847.828 - 5873.034: 8.9523% ( 276) 00:19:03.512 5873.034 - 5898.240: 10.5415% ( 297) 00:19:03.512 5898.240 - 5923.446: 12.1789% ( 306) 00:19:03.512 5923.446 - 5948.652: 13.9394% ( 329) 00:19:03.512 5948.652 - 5973.858: 15.6464% ( 319) 00:19:03.512 5973.858 - 5999.065: 17.5246% ( 351) 00:19:03.512 5999.065 - 6024.271: 19.3975% ( 350) 00:19:03.512 6024.271 - 6049.477: 21.2222% ( 341) 00:19:03.512 6049.477 - 6074.683: 23.1699% ( 364) 00:19:03.512 6074.683 - 6099.889: 24.9679% ( 336) 00:19:03.512 6099.889 - 6125.095: 26.9959% ( 379) 00:19:03.512 6125.095 - 6150.302: 28.9116% ( 358) 00:19:03.512 6150.302 - 6175.508: 30.8005% ( 353) 00:19:03.512 6175.508 - 6200.714: 32.8874% ( 390) 00:19:03.512 6200.714 - 6225.920: 34.7924% ( 356) 00:19:03.512 6225.920 - 6251.126: 36.9274% ( 399) 00:19:03.512 6251.126 - 6276.332: 38.8699% ( 363) 00:19:03.512 6276.332 - 6301.538: 40.8765% ( 375) 00:19:03.512 6301.538 - 6326.745: 42.9580% ( 389) 
00:19:03.512 6326.745 - 6351.951: 44.9433% ( 371) 00:19:03.512 6351.951 - 6377.157: 47.0034% ( 385) 00:19:03.512 6377.157 - 6402.363: 49.0422% ( 381) 00:19:03.512 6402.363 - 6427.569: 51.1291% ( 390) 00:19:03.512 6427.569 - 6452.775: 53.1839% ( 384) 00:19:03.512 6452.775 - 6503.188: 57.4058% ( 789) 00:19:03.512 6503.188 - 6553.600: 61.4726% ( 760) 00:19:03.512 6553.600 - 6604.012: 65.5554% ( 763) 00:19:03.512 6604.012 - 6654.425: 69.3279% ( 705) 00:19:03.512 6654.425 - 6704.837: 72.7633% ( 642) 00:19:03.512 6704.837 - 6755.249: 75.6956% ( 548) 00:19:03.512 6755.249 - 6805.662: 78.0126% ( 433) 00:19:03.512 6805.662 - 6856.074: 79.8748% ( 348) 00:19:03.512 6856.074 - 6906.486: 81.1697% ( 242) 00:19:03.512 6906.486 - 6956.898: 82.1490% ( 183) 00:19:03.512 6956.898 - 7007.311: 82.9570% ( 151) 00:19:03.512 7007.311 - 7057.723: 83.5616% ( 113) 00:19:03.512 7057.723 - 7108.135: 84.0914% ( 99) 00:19:03.512 7108.135 - 7158.548: 84.5302% ( 82) 00:19:03.512 7158.548 - 7208.960: 84.9583% ( 80) 00:19:03.512 7208.960 - 7259.372: 85.3168% ( 67) 00:19:03.512 7259.372 - 7309.785: 85.6753% ( 67) 00:19:03.512 7309.785 - 7360.197: 86.0071% ( 62) 00:19:03.512 7360.197 - 7410.609: 86.2479% ( 45) 00:19:03.512 7410.609 - 7461.022: 86.4565% ( 39) 00:19:03.512 7461.022 - 7511.434: 86.6759% ( 41) 00:19:03.512 7511.434 - 7561.846: 86.8739% ( 37) 00:19:03.512 7561.846 - 7612.258: 87.0131% ( 26) 00:19:03.512 7612.258 - 7662.671: 87.1843% ( 32) 00:19:03.512 7662.671 - 7713.083: 87.3716% ( 35) 00:19:03.512 7713.083 - 7763.495: 87.5535% ( 34) 00:19:03.512 7763.495 - 7813.908: 87.8104% ( 48) 00:19:03.512 7813.908 - 7864.320: 88.0405% ( 43) 00:19:03.512 7864.320 - 7914.732: 88.2331% ( 36) 00:19:03.512 7914.732 - 7965.145: 88.4632% ( 43) 00:19:03.512 7965.145 - 8015.557: 88.6612% ( 37) 00:19:03.512 8015.557 - 8065.969: 88.8485% ( 35) 00:19:03.512 8065.969 - 8116.382: 89.0464% ( 37) 00:19:03.512 8116.382 - 8166.794: 89.2123% ( 31) 00:19:03.512 8166.794 - 8217.206: 89.4157% ( 38) 00:19:03.512 8217.206 - 8267.618: 89.5815% ( 31) 00:19:03.512 8267.618 - 8318.031: 89.7688% ( 35) 00:19:03.512 8318.031 - 8368.443: 89.9561% ( 35) 00:19:03.512 8368.443 - 8418.855: 90.1434% ( 35) 00:19:03.512 8418.855 - 8469.268: 90.3093% ( 31) 00:19:03.512 8469.268 - 8519.680: 90.4912% ( 34) 00:19:03.512 8519.680 - 8570.092: 90.7106% ( 41) 00:19:03.512 8570.092 - 8620.505: 90.9193% ( 39) 00:19:03.512 8620.505 - 8670.917: 91.1655% ( 46) 00:19:03.512 8670.917 - 8721.329: 91.4277% ( 49) 00:19:03.512 8721.329 - 8771.742: 91.6792% ( 47) 00:19:03.512 8771.742 - 8822.154: 91.9146% ( 44) 00:19:03.512 8822.154 - 8872.566: 92.1661% ( 47) 00:19:03.512 8872.566 - 8922.978: 92.4336% ( 50) 00:19:03.512 8922.978 - 8973.391: 92.6530% ( 41) 00:19:03.512 8973.391 - 9023.803: 92.8831% ( 43) 00:19:03.512 9023.803 - 9074.215: 93.1186% ( 44) 00:19:03.512 9074.215 - 9124.628: 93.3487% ( 43) 00:19:03.512 9124.628 - 9175.040: 93.5895% ( 45) 00:19:03.512 9175.040 - 9225.452: 93.8303% ( 45) 00:19:03.512 9225.452 - 9275.865: 94.1032% ( 51) 00:19:03.512 9275.865 - 9326.277: 94.3600% ( 48) 00:19:03.512 9326.277 - 9376.689: 94.5794% ( 41) 00:19:03.512 9376.689 - 9427.102: 94.8470% ( 50) 00:19:03.512 9427.102 - 9477.514: 95.0771% ( 43) 00:19:03.512 9477.514 - 9527.926: 95.3071% ( 43) 00:19:03.512 9527.926 - 9578.338: 95.4944% ( 35) 00:19:03.512 9578.338 - 9628.751: 95.6657% ( 32) 00:19:03.512 9628.751 - 9679.163: 95.8530% ( 35) 00:19:03.512 9679.163 - 9729.575: 96.0081% ( 29) 00:19:03.512 9729.575 - 9779.988: 96.1633% ( 29) 00:19:03.512 9779.988 - 9830.400: 96.2917% ( 24) 
00:19:03.512 9830.400 - 9880.812: 96.4148% ( 23) 00:19:03.512 9880.812 - 9931.225: 96.4897% ( 14) 00:19:03.512 9931.225 - 9981.637: 96.5593% ( 13) 00:19:03.512 9981.637 - 10032.049: 96.6449% ( 16) 00:19:03.512 10032.049 - 10082.462: 96.7252% ( 15) 00:19:03.512 10082.462 - 10132.874: 96.8108% ( 16) 00:19:03.512 10132.874 - 10183.286: 96.8964% ( 16) 00:19:03.512 10183.286 - 10233.698: 96.9927% ( 18) 00:19:03.513 10233.698 - 10284.111: 97.0944% ( 19) 00:19:03.513 10284.111 - 10334.523: 97.1854% ( 17) 00:19:03.513 10334.523 - 10384.935: 97.2870% ( 19) 00:19:03.513 10384.935 - 10435.348: 97.3512% ( 12) 00:19:03.513 10435.348 - 10485.760: 97.4048% ( 10) 00:19:03.513 10485.760 - 10536.172: 97.4797% ( 14) 00:19:03.513 10536.172 - 10586.585: 97.5332% ( 10) 00:19:03.513 10586.585 - 10636.997: 97.6134% ( 15) 00:19:03.513 10636.997 - 10687.409: 97.6670% ( 10) 00:19:03.513 10687.409 - 10737.822: 97.7258% ( 11) 00:19:03.513 10737.822 - 10788.234: 97.7954% ( 13) 00:19:03.513 10788.234 - 10838.646: 97.8542% ( 11) 00:19:03.513 10838.646 - 10889.058: 97.9131% ( 11) 00:19:03.513 10889.058 - 10939.471: 97.9613% ( 9) 00:19:03.513 10939.471 - 10989.883: 98.0308% ( 13) 00:19:03.513 10989.883 - 11040.295: 98.0790% ( 9) 00:19:03.513 11040.295 - 11090.708: 98.1271% ( 9) 00:19:03.513 11090.708 - 11141.120: 98.1592% ( 6) 00:19:03.513 11141.120 - 11191.532: 98.1914% ( 6) 00:19:03.513 11191.532 - 11241.945: 98.2342% ( 8) 00:19:03.513 11241.945 - 11292.357: 98.2663% ( 6) 00:19:03.513 11292.357 - 11342.769: 98.3251% ( 11) 00:19:03.513 11342.769 - 11393.182: 98.3572% ( 6) 00:19:03.513 11393.182 - 11443.594: 98.4054% ( 9) 00:19:03.513 11443.594 - 11494.006: 98.4375% ( 6) 00:19:03.513 11494.006 - 11544.418: 98.4803% ( 8) 00:19:03.513 11544.418 - 11594.831: 98.5124% ( 6) 00:19:03.513 11594.831 - 11645.243: 98.5552% ( 8) 00:19:03.513 11645.243 - 11695.655: 98.5820% ( 5) 00:19:03.513 11695.655 - 11746.068: 98.6301% ( 9) 00:19:03.513 11746.068 - 11796.480: 98.6622% ( 6) 00:19:03.513 11796.480 - 11846.892: 98.6890% ( 5) 00:19:03.513 11846.892 - 11897.305: 98.7211% ( 6) 00:19:03.513 11897.305 - 11947.717: 98.7639% ( 8) 00:19:03.513 11947.717 - 11998.129: 98.8014% ( 7) 00:19:03.513 11998.129 - 12048.542: 98.8442% ( 8) 00:19:03.513 12048.542 - 12098.954: 98.9030% ( 11) 00:19:03.513 12098.954 - 12149.366: 98.9351% ( 6) 00:19:03.513 12149.366 - 12199.778: 98.9565% ( 4) 00:19:03.513 12199.778 - 12250.191: 98.9780% ( 4) 00:19:03.513 12250.191 - 12300.603: 99.0101% ( 6) 00:19:03.513 12300.603 - 12351.015: 99.0315% ( 4) 00:19:03.513 12351.015 - 12401.428: 99.0529% ( 4) 00:19:03.513 12401.428 - 12451.840: 99.0636% ( 2) 00:19:03.513 12451.840 - 12502.252: 99.0743% ( 2) 00:19:03.513 12502.252 - 12552.665: 99.0796% ( 1) 00:19:03.513 12552.665 - 12603.077: 99.0957% ( 3) 00:19:03.513 12603.077 - 12653.489: 99.1010% ( 1) 00:19:03.513 12653.489 - 12703.902: 99.1117% ( 2) 00:19:03.513 12703.902 - 12754.314: 99.1171% ( 1) 00:19:03.513 12754.314 - 12804.726: 99.1278% ( 2) 00:19:03.513 12804.726 - 12855.138: 99.1331% ( 1) 00:19:03.513 12855.138 - 12905.551: 99.1492% ( 3) 00:19:03.513 12905.551 - 13006.375: 99.1652% ( 3) 00:19:03.513 13006.375 - 13107.200: 99.1866% ( 4) 00:19:03.513 13107.200 - 13208.025: 99.1973% ( 2) 00:19:03.513 13208.025 - 13308.849: 99.2134% ( 3) 00:19:03.513 13308.849 - 13409.674: 99.2348% ( 4) 00:19:03.513 13409.674 - 13510.498: 99.2562% ( 4) 00:19:03.513 13510.498 - 13611.323: 99.2776% ( 4) 00:19:03.513 13611.323 - 13712.148: 99.2937% ( 3) 00:19:03.513 13712.148 - 13812.972: 99.3097% ( 3) 00:19:03.513 13812.972 - 13913.797: 
99.3151% ( 1) 00:19:03.513 27827.594 - 28029.243: 99.3418% ( 5) 00:19:03.513 28029.243 - 28230.892: 99.3846% ( 8) 00:19:03.513 28230.892 - 28432.542: 99.4274% ( 8) 00:19:03.513 28432.542 - 28634.191: 99.4702% ( 8) 00:19:03.513 28634.191 - 28835.840: 99.5131% ( 8) 00:19:03.513 28835.840 - 29037.489: 99.5505% ( 7) 00:19:03.513 29037.489 - 29239.138: 99.5933% ( 8) 00:19:03.513 29239.138 - 29440.788: 99.6361% ( 8) 00:19:03.513 29440.788 - 29642.437: 99.6575% ( 4) 00:19:03.513 32263.877 - 32465.526: 99.6843% ( 5) 00:19:03.513 32465.526 - 32667.175: 99.7271% ( 8) 00:19:03.513 32667.175 - 32868.825: 99.7753% ( 9) 00:19:03.513 32868.825 - 33070.474: 99.8181% ( 8) 00:19:03.513 33070.474 - 33272.123: 99.8555% ( 7) 00:19:03.513 33272.123 - 33473.772: 99.9037% ( 9) 00:19:03.513 33473.772 - 33675.422: 99.9411% ( 7) 00:19:03.513 33675.422 - 33877.071: 99.9839% ( 8) 00:19:03.513 33877.071 - 34078.720: 100.0000% ( 3) 00:19:03.513 00:19:03.513 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:03.513 ============================================================================== 00:19:03.513 Range in us Cumulative IO count 00:19:03.513 5595.766 - 5620.972: 0.0107% ( 2) 00:19:03.513 5620.972 - 5646.178: 0.0428% ( 6) 00:19:03.513 5646.178 - 5671.385: 0.1177% ( 14) 00:19:03.513 5671.385 - 5696.591: 0.3746% ( 48) 00:19:03.513 5696.591 - 5721.797: 0.5565% ( 34) 00:19:03.513 5721.797 - 5747.003: 0.7973% ( 45) 00:19:03.513 5747.003 - 5772.209: 1.1772% ( 71) 00:19:03.513 5772.209 - 5797.415: 1.7337% ( 104) 00:19:03.513 5797.415 - 5822.622: 2.7451% ( 189) 00:19:03.513 5822.622 - 5847.828: 3.8313% ( 203) 00:19:03.513 5847.828 - 5873.034: 5.0567% ( 229) 00:19:03.513 5873.034 - 5898.240: 6.5818% ( 285) 00:19:03.513 5898.240 - 5923.446: 8.1924% ( 301) 00:19:03.513 5923.446 - 5948.652: 9.7656% ( 294) 00:19:03.513 5948.652 - 5973.858: 11.5582% ( 335) 00:19:03.513 5973.858 - 5999.065: 13.6398% ( 389) 00:19:03.513 5999.065 - 6024.271: 15.7855% ( 401) 00:19:03.513 6024.271 - 6049.477: 18.0437% ( 422) 00:19:03.513 6049.477 - 6074.683: 20.1680% ( 397) 00:19:03.513 6074.683 - 6099.889: 22.3031% ( 399) 00:19:03.513 6099.889 - 6125.095: 24.4917% ( 409) 00:19:03.513 6125.095 - 6150.302: 26.6160% ( 397) 00:19:03.513 6150.302 - 6175.508: 28.8795% ( 423) 00:19:03.513 6175.508 - 6200.714: 31.2500% ( 443) 00:19:03.513 6200.714 - 6225.920: 33.6152% ( 442) 00:19:03.513 6225.920 - 6251.126: 36.0178% ( 449) 00:19:03.513 6251.126 - 6276.332: 38.3776% ( 441) 00:19:03.513 6276.332 - 6301.538: 40.7802% ( 449) 00:19:03.513 6301.538 - 6326.745: 43.2256% ( 457) 00:19:03.513 6326.745 - 6351.951: 45.6389% ( 451) 00:19:03.513 6351.951 - 6377.157: 48.0094% ( 443) 00:19:03.513 6377.157 - 6402.363: 50.4602% ( 458) 00:19:03.513 6402.363 - 6427.569: 52.9056% ( 457) 00:19:03.513 6427.569 - 6452.775: 55.2975% ( 447) 00:19:03.513 6452.775 - 6503.188: 60.1134% ( 900) 00:19:03.513 6503.188 - 6553.600: 64.7153% ( 860) 00:19:03.513 6553.600 - 6604.012: 68.9961% ( 800) 00:19:03.513 6604.012 - 6654.425: 72.6348% ( 680) 00:19:03.513 6654.425 - 6704.837: 75.6421% ( 562) 00:19:03.513 6704.837 - 6755.249: 77.9645% ( 434) 00:19:03.513 6755.249 - 6805.662: 79.7678% ( 337) 00:19:03.513 6805.662 - 6856.074: 81.0627% ( 242) 00:19:03.513 6856.074 - 6906.486: 82.0312% ( 181) 00:19:03.513 6906.486 - 6956.898: 82.7590% ( 136) 00:19:03.513 6956.898 - 7007.311: 83.3101% ( 103) 00:19:03.513 7007.311 - 7057.723: 83.8720% ( 105) 00:19:03.513 7057.723 - 7108.135: 84.3429% ( 88) 00:19:03.513 7108.135 - 7158.548: 84.7549% ( 77) 00:19:03.513 7158.548 - 7208.960: 
85.1295% ( 70) 00:19:03.513 7208.960 - 7259.372: 85.4773% ( 65) 00:19:03.513 7259.372 - 7309.785: 85.7930% ( 59) 00:19:03.513 7309.785 - 7360.197: 86.0445% ( 47) 00:19:03.513 7360.197 - 7410.609: 86.2960% ( 47) 00:19:03.513 7410.609 - 7461.022: 86.5261% ( 43) 00:19:03.513 7461.022 - 7511.434: 86.7080% ( 34) 00:19:03.513 7511.434 - 7561.846: 86.9114% ( 38) 00:19:03.513 7561.846 - 7612.258: 87.1147% ( 38) 00:19:03.513 7612.258 - 7662.671: 87.3288% ( 40) 00:19:03.513 7662.671 - 7713.083: 87.4893% ( 30) 00:19:03.513 7713.083 - 7763.495: 87.6766% ( 35) 00:19:03.513 7763.495 - 7813.908: 87.8692% ( 36) 00:19:03.513 7813.908 - 7864.320: 88.0244% ( 29) 00:19:03.513 7864.320 - 7914.732: 88.2170% ( 36) 00:19:03.513 7914.732 - 7965.145: 88.3936% ( 33) 00:19:03.513 7965.145 - 8015.557: 88.5702% ( 33) 00:19:03.513 8015.557 - 8065.969: 88.7254% ( 29) 00:19:03.513 8065.969 - 8116.382: 88.8699% ( 27) 00:19:03.513 8116.382 - 8166.794: 89.0304% ( 30) 00:19:03.513 8166.794 - 8217.206: 89.2070% ( 33) 00:19:03.513 8217.206 - 8267.618: 89.4050% ( 37) 00:19:03.513 8267.618 - 8318.031: 89.6083% ( 38) 00:19:03.513 8318.031 - 8368.443: 89.7902% ( 34) 00:19:03.513 8368.443 - 8418.855: 89.9882% ( 37) 00:19:03.513 8418.855 - 8469.268: 90.1862% ( 37) 00:19:03.513 8469.268 - 8519.680: 90.4003% ( 40) 00:19:03.513 8519.680 - 8570.092: 90.6196% ( 41) 00:19:03.513 8570.092 - 8620.505: 90.8711% ( 47) 00:19:03.513 8620.505 - 8670.917: 91.1012% ( 43) 00:19:03.513 8670.917 - 8721.329: 91.3688% ( 50) 00:19:03.513 8721.329 - 8771.742: 91.6256% ( 48) 00:19:03.513 8771.742 - 8822.154: 91.8450% ( 41) 00:19:03.513 8822.154 - 8872.566: 92.0858% ( 45) 00:19:03.513 8872.566 - 8922.978: 92.3427% ( 48) 00:19:03.513 8922.978 - 8973.391: 92.6102% ( 50) 00:19:03.513 8973.391 - 9023.803: 92.8938% ( 53) 00:19:03.513 9023.803 - 9074.215: 93.1614% ( 50) 00:19:03.513 9074.215 - 9124.628: 93.4396% ( 52) 00:19:03.513 9124.628 - 9175.040: 93.6965% ( 48) 00:19:03.513 9175.040 - 9225.452: 93.9373% ( 45) 00:19:03.513 9225.452 - 9275.865: 94.1727% ( 44) 00:19:03.513 9275.865 - 9326.277: 94.4135% ( 45) 00:19:03.513 9326.277 - 9376.689: 94.6597% ( 46) 00:19:03.513 9376.689 - 9427.102: 94.8737% ( 40) 00:19:03.513 9427.102 - 9477.514: 95.1092% ( 44) 00:19:03.513 9477.514 - 9527.926: 95.3232% ( 40) 00:19:03.513 9527.926 - 9578.338: 95.5051% ( 34) 00:19:03.513 9578.338 - 9628.751: 95.6871% ( 34) 00:19:03.513 9628.751 - 9679.163: 95.8637% ( 33) 00:19:03.513 9679.163 - 9729.575: 96.0295% ( 31) 00:19:03.513 9729.575 - 9779.988: 96.2008% ( 32) 00:19:03.514 9779.988 - 9830.400: 96.3345% ( 25) 00:19:03.514 9830.400 - 9880.812: 96.4897% ( 29) 00:19:03.514 9880.812 - 9931.225: 96.6021% ( 21) 00:19:03.514 9931.225 - 9981.637: 96.6824% ( 15) 00:19:03.514 9981.637 - 10032.049: 96.7573% ( 14) 00:19:03.514 10032.049 - 10082.462: 96.8643% ( 20) 00:19:03.514 10082.462 - 10132.874: 96.9713% ( 20) 00:19:03.514 10132.874 - 10183.286: 97.0783% ( 20) 00:19:03.514 10183.286 - 10233.698: 97.1479% ( 13) 00:19:03.514 10233.698 - 10284.111: 97.2228% ( 14) 00:19:03.514 10284.111 - 10334.523: 97.2817% ( 11) 00:19:03.514 10334.523 - 10384.935: 97.3245% ( 8) 00:19:03.514 10384.935 - 10435.348: 97.3726% ( 9) 00:19:03.514 10435.348 - 10485.760: 97.4262% ( 10) 00:19:03.514 10485.760 - 10536.172: 97.4904% ( 12) 00:19:03.514 10536.172 - 10586.585: 97.5599% ( 13) 00:19:03.514 10586.585 - 10636.997: 97.6081% ( 9) 00:19:03.514 10636.997 - 10687.409: 97.6562% ( 9) 00:19:03.514 10687.409 - 10737.822: 97.6991% ( 8) 00:19:03.514 10737.822 - 10788.234: 97.7419% ( 8) 00:19:03.514 10788.234 - 10838.646: 
97.7579% ( 3) 00:19:03.514 10838.646 - 10889.058: 97.7793% ( 4) 00:19:03.514 10889.058 - 10939.471: 97.8007% ( 4) 00:19:03.514 10939.471 - 10989.883: 97.8168% ( 3) 00:19:03.514 10989.883 - 11040.295: 97.8328% ( 3) 00:19:03.514 11040.295 - 11090.708: 97.8489% ( 3) 00:19:03.514 11090.708 - 11141.120: 97.8703% ( 4) 00:19:03.514 11141.120 - 11191.532: 97.8970% ( 5) 00:19:03.514 11191.532 - 11241.945: 97.9399% ( 8) 00:19:03.514 11241.945 - 11292.357: 98.0041% ( 12) 00:19:03.514 11292.357 - 11342.769: 98.0629% ( 11) 00:19:03.514 11342.769 - 11393.182: 98.1378% ( 14) 00:19:03.514 11393.182 - 11443.594: 98.1807% ( 8) 00:19:03.514 11443.594 - 11494.006: 98.2235% ( 8) 00:19:03.514 11494.006 - 11544.418: 98.2716% ( 9) 00:19:03.514 11544.418 - 11594.831: 98.3251% ( 10) 00:19:03.514 11594.831 - 11645.243: 98.3786% ( 10) 00:19:03.514 11645.243 - 11695.655: 98.4321% ( 10) 00:19:03.514 11695.655 - 11746.068: 98.4857% ( 10) 00:19:03.514 11746.068 - 11796.480: 98.5392% ( 10) 00:19:03.514 11796.480 - 11846.892: 98.6034% ( 12) 00:19:03.514 11846.892 - 11897.305: 98.6569% ( 10) 00:19:03.514 11897.305 - 11947.717: 98.7158% ( 11) 00:19:03.514 11947.717 - 11998.129: 98.7746% ( 11) 00:19:03.514 11998.129 - 12048.542: 98.8281% ( 10) 00:19:03.514 12048.542 - 12098.954: 98.8709% ( 8) 00:19:03.514 12098.954 - 12149.366: 98.9030% ( 6) 00:19:03.514 12149.366 - 12199.778: 98.9458% ( 8) 00:19:03.514 12199.778 - 12250.191: 98.9887% ( 8) 00:19:03.514 12250.191 - 12300.603: 99.0261% ( 7) 00:19:03.514 12300.603 - 12351.015: 99.0689% ( 8) 00:19:03.514 12351.015 - 12401.428: 99.1117% ( 8) 00:19:03.514 12401.428 - 12451.840: 99.1545% ( 8) 00:19:03.514 12451.840 - 12502.252: 99.1973% ( 8) 00:19:03.514 12502.252 - 12552.665: 99.2134% ( 3) 00:19:03.514 12552.665 - 12603.077: 99.2295% ( 3) 00:19:03.514 12603.077 - 12653.489: 99.2509% ( 4) 00:19:03.514 12653.489 - 12703.902: 99.2723% ( 4) 00:19:03.514 12703.902 - 12754.314: 99.2937% ( 4) 00:19:03.514 12754.314 - 12804.726: 99.3097% ( 3) 00:19:03.514 12804.726 - 12855.138: 99.3151% ( 1) 00:19:03.514 26214.400 - 26416.049: 99.3472% ( 6) 00:19:03.514 26416.049 - 26617.698: 99.3953% ( 9) 00:19:03.514 26617.698 - 26819.348: 99.4381% ( 8) 00:19:03.514 26819.348 - 27020.997: 99.4863% ( 9) 00:19:03.514 27020.997 - 27222.646: 99.5345% ( 9) 00:19:03.514 27222.646 - 27424.295: 99.5773% ( 8) 00:19:03.514 27424.295 - 27625.945: 99.6254% ( 9) 00:19:03.514 27625.945 - 27827.594: 99.6575% ( 6) 00:19:03.514 30650.683 - 30852.332: 99.6950% ( 7) 00:19:03.514 30852.332 - 31053.982: 99.7432% ( 9) 00:19:03.514 31053.982 - 31255.631: 99.7860% ( 8) 00:19:03.514 31255.631 - 31457.280: 99.8341% ( 9) 00:19:03.514 31457.280 - 31658.929: 99.8823% ( 9) 00:19:03.514 31658.929 - 31860.578: 99.9251% ( 8) 00:19:03.514 31860.578 - 32062.228: 99.9732% ( 9) 00:19:03.514 32062.228 - 32263.877: 100.0000% ( 5) 00:19:03.514 00:19:03.514 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:03.514 ============================================================================== 00:19:03.514 Range in us Cumulative IO count 00:19:03.514 5545.354 - 5570.560: 0.0107% ( 2) 00:19:03.514 5570.560 - 5595.766: 0.0535% ( 8) 00:19:03.514 5595.766 - 5620.972: 0.1124% ( 11) 00:19:03.514 5620.972 - 5646.178: 0.1766% ( 12) 00:19:03.514 5646.178 - 5671.385: 0.2890% ( 21) 00:19:03.514 5671.385 - 5696.591: 0.3906% ( 19) 00:19:03.514 5696.591 - 5721.797: 0.5030% ( 21) 00:19:03.514 5721.797 - 5747.003: 0.7331% ( 43) 00:19:03.514 5747.003 - 5772.209: 1.0970% ( 68) 00:19:03.514 5772.209 - 5797.415: 1.6214% ( 98) 00:19:03.514 5797.415 - 
5822.622: 2.6006% ( 183) 00:19:03.514 5822.622 - 5847.828: 3.7511% ( 215) 00:19:03.514 5847.828 - 5873.034: 5.1584% ( 263) 00:19:03.514 5873.034 - 5898.240: 6.6834% ( 285) 00:19:03.514 5898.240 - 5923.446: 8.3744% ( 316) 00:19:03.514 5923.446 - 5948.652: 10.1295% ( 328) 00:19:03.514 5948.652 - 5973.858: 11.9863% ( 347) 00:19:03.514 5973.858 - 5999.065: 13.9341% ( 364) 00:19:03.514 5999.065 - 6024.271: 15.9193% ( 371) 00:19:03.514 6024.271 - 6049.477: 17.9955% ( 388) 00:19:03.514 6049.477 - 6074.683: 20.1252% ( 398) 00:19:03.514 6074.683 - 6099.889: 22.3191% ( 410) 00:19:03.514 6099.889 - 6125.095: 24.5345% ( 414) 00:19:03.514 6125.095 - 6150.302: 26.7551% ( 415) 00:19:03.514 6150.302 - 6175.508: 28.9651% ( 413) 00:19:03.514 6175.508 - 6200.714: 31.2232% ( 422) 00:19:03.514 6200.714 - 6225.920: 33.6259% ( 449) 00:19:03.514 6225.920 - 6251.126: 35.9643% ( 437) 00:19:03.514 6251.126 - 6276.332: 38.4097% ( 457) 00:19:03.514 6276.332 - 6301.538: 40.8016% ( 447) 00:19:03.514 6301.538 - 6326.745: 43.1560% ( 440) 00:19:03.514 6326.745 - 6351.951: 45.5640% ( 450) 00:19:03.514 6351.951 - 6377.157: 48.0041% ( 456) 00:19:03.514 6377.157 - 6402.363: 50.4281% ( 453) 00:19:03.514 6402.363 - 6427.569: 52.8414% ( 451) 00:19:03.514 6427.569 - 6452.775: 55.2761% ( 455) 00:19:03.514 6452.775 - 6503.188: 59.9529% ( 874) 00:19:03.514 6503.188 - 6553.600: 64.7100% ( 889) 00:19:03.514 6553.600 - 6604.012: 69.0176% ( 805) 00:19:03.514 6604.012 - 6654.425: 72.6830% ( 685) 00:19:03.514 6654.425 - 6704.837: 75.6368% ( 552) 00:19:03.514 6704.837 - 6755.249: 77.9912% ( 440) 00:19:03.514 6755.249 - 6805.662: 79.7945% ( 337) 00:19:03.514 6805.662 - 6856.074: 81.0788% ( 240) 00:19:03.514 6856.074 - 6906.486: 82.1329% ( 197) 00:19:03.514 6906.486 - 6956.898: 82.9570% ( 154) 00:19:03.514 6956.898 - 7007.311: 83.5884% ( 118) 00:19:03.514 7007.311 - 7057.723: 84.1556% ( 106) 00:19:03.514 7057.723 - 7108.135: 84.6318% ( 89) 00:19:03.514 7108.135 - 7158.548: 85.0706% ( 82) 00:19:03.514 7158.548 - 7208.960: 85.4452% ( 70) 00:19:03.514 7208.960 - 7259.372: 85.7663% ( 60) 00:19:03.514 7259.372 - 7309.785: 86.0338% ( 50) 00:19:03.514 7309.785 - 7360.197: 86.2639% ( 43) 00:19:03.514 7360.197 - 7410.609: 86.4726% ( 39) 00:19:03.514 7410.609 - 7461.022: 86.6438% ( 32) 00:19:03.514 7461.022 - 7511.434: 86.8311% ( 35) 00:19:03.514 7511.434 - 7561.846: 86.9756% ( 27) 00:19:03.514 7561.846 - 7612.258: 87.1254% ( 28) 00:19:03.514 7612.258 - 7662.671: 87.3074% ( 34) 00:19:03.514 7662.671 - 7713.083: 87.4518% ( 27) 00:19:03.514 7713.083 - 7763.495: 87.6017% ( 28) 00:19:03.514 7763.495 - 7813.908: 87.7729% ( 32) 00:19:03.514 7813.908 - 7864.320: 87.9174% ( 27) 00:19:03.514 7864.320 - 7914.732: 88.0672% ( 28) 00:19:03.514 7914.732 - 7965.145: 88.1796% ( 21) 00:19:03.514 7965.145 - 8015.557: 88.3294% ( 28) 00:19:03.514 8015.557 - 8065.969: 88.4792% ( 28) 00:19:03.514 8065.969 - 8116.382: 88.6451% ( 31) 00:19:03.514 8116.382 - 8166.794: 88.8271% ( 34) 00:19:03.514 8166.794 - 8217.206: 89.0625% ( 44) 00:19:03.514 8217.206 - 8267.618: 89.2337% ( 32) 00:19:03.514 8267.618 - 8318.031: 89.4478% ( 40) 00:19:03.514 8318.031 - 8368.443: 89.6886% ( 45) 00:19:03.514 8368.443 - 8418.855: 89.9240% ( 44) 00:19:03.514 8418.855 - 8469.268: 90.2183% ( 55) 00:19:03.514 8469.268 - 8519.680: 90.4859% ( 50) 00:19:03.514 8519.680 - 8570.092: 90.7320% ( 46) 00:19:03.514 8570.092 - 8620.505: 90.9889% ( 48) 00:19:03.514 8620.505 - 8670.917: 91.2618% ( 51) 00:19:03.514 8670.917 - 8721.329: 91.5400% ( 52) 00:19:03.514 8721.329 - 8771.742: 91.8290% ( 54) 00:19:03.514 
8771.742 - 8822.154: 92.0858% ( 48) 00:19:03.514 8822.154 - 8872.566: 92.3427% ( 48) 00:19:03.514 8872.566 - 8922.978: 92.6477% ( 57) 00:19:03.514 8922.978 - 8973.391: 92.9206% ( 51) 00:19:03.514 8973.391 - 9023.803: 93.1507% ( 43) 00:19:03.514 9023.803 - 9074.215: 93.3808% ( 43) 00:19:03.514 9074.215 - 9124.628: 93.6162% ( 44) 00:19:03.514 9124.628 - 9175.040: 93.8303% ( 40) 00:19:03.514 9175.040 - 9225.452: 94.0657% ( 44) 00:19:03.514 9225.452 - 9275.865: 94.2530% ( 35) 00:19:03.514 9275.865 - 9326.277: 94.4135% ( 30) 00:19:03.514 9326.277 - 9376.689: 94.6008% ( 35) 00:19:03.514 9376.689 - 9427.102: 94.8042% ( 38) 00:19:03.514 9427.102 - 9477.514: 95.0128% ( 39) 00:19:03.514 9477.514 - 9527.926: 95.1841% ( 32) 00:19:03.514 9527.926 - 9578.338: 95.3393% ( 29) 00:19:03.514 9578.338 - 9628.751: 95.4998% ( 30) 00:19:03.514 9628.751 - 9679.163: 95.6603% ( 30) 00:19:03.514 9679.163 - 9729.575: 95.8048% ( 27) 00:19:03.514 9729.575 - 9779.988: 95.9386% ( 25) 00:19:03.514 9779.988 - 9830.400: 96.0616% ( 23) 00:19:03.514 9830.400 - 9880.812: 96.1901% ( 24) 00:19:03.515 9880.812 - 9931.225: 96.3238% ( 25) 00:19:03.515 9931.225 - 9981.637: 96.4523% ( 24) 00:19:03.515 9981.637 - 10032.049: 96.5753% ( 23) 00:19:03.515 10032.049 - 10082.462: 96.7145% ( 26) 00:19:03.515 10082.462 - 10132.874: 96.8268% ( 21) 00:19:03.515 10132.874 - 10183.286: 96.9285% ( 19) 00:19:03.515 10183.286 - 10233.698: 97.0355% ( 20) 00:19:03.515 10233.698 - 10284.111: 97.1051% ( 13) 00:19:03.515 10284.111 - 10334.523: 97.1854% ( 15) 00:19:03.515 10334.523 - 10384.935: 97.2549% ( 13) 00:19:03.515 10384.935 - 10435.348: 97.3245% ( 13) 00:19:03.515 10435.348 - 10485.760: 97.4155% ( 17) 00:19:03.515 10485.760 - 10536.172: 97.4957% ( 15) 00:19:03.515 10536.172 - 10586.585: 97.5599% ( 12) 00:19:03.515 10586.585 - 10636.997: 97.6241% ( 12) 00:19:03.515 10636.997 - 10687.409: 97.6777% ( 10) 00:19:03.515 10687.409 - 10737.822: 97.7098% ( 6) 00:19:03.515 10737.822 - 10788.234: 97.7312% ( 4) 00:19:03.515 10788.234 - 10838.646: 97.7526% ( 4) 00:19:03.515 10838.646 - 10889.058: 97.8114% ( 11) 00:19:03.515 10889.058 - 10939.471: 97.8435% ( 6) 00:19:03.515 10939.471 - 10989.883: 97.8863% ( 8) 00:19:03.515 10989.883 - 11040.295: 97.9238% ( 7) 00:19:03.515 11040.295 - 11090.708: 97.9559% ( 6) 00:19:03.515 11090.708 - 11141.120: 97.9987% ( 8) 00:19:03.515 11141.120 - 11191.532: 98.0576% ( 11) 00:19:03.515 11191.532 - 11241.945: 98.1111% ( 10) 00:19:03.515 11241.945 - 11292.357: 98.1592% ( 9) 00:19:03.515 11292.357 - 11342.769: 98.2074% ( 9) 00:19:03.515 11342.769 - 11393.182: 98.2502% ( 8) 00:19:03.515 11393.182 - 11443.594: 98.2823% ( 6) 00:19:03.515 11443.594 - 11494.006: 98.3198% ( 7) 00:19:03.515 11494.006 - 11544.418: 98.3412% ( 4) 00:19:03.515 11544.418 - 11594.831: 98.3947% ( 10) 00:19:03.515 11594.831 - 11645.243: 98.4536% ( 11) 00:19:03.515 11645.243 - 11695.655: 98.4857% ( 6) 00:19:03.515 11695.655 - 11746.068: 98.5338% ( 9) 00:19:03.515 11746.068 - 11796.480: 98.5766% ( 8) 00:19:03.515 11796.480 - 11846.892: 98.6248% ( 9) 00:19:03.515 11846.892 - 11897.305: 98.6622% ( 7) 00:19:03.515 11897.305 - 11947.717: 98.6997% ( 7) 00:19:03.515 11947.717 - 11998.129: 98.7425% ( 8) 00:19:03.515 11998.129 - 12048.542: 98.7853% ( 8) 00:19:03.515 12048.542 - 12098.954: 98.8281% ( 8) 00:19:03.515 12098.954 - 12149.366: 98.8709% ( 8) 00:19:03.515 12149.366 - 12199.778: 98.9137% ( 8) 00:19:03.515 12199.778 - 12250.191: 98.9565% ( 8) 00:19:03.515 12250.191 - 12300.603: 98.9994% ( 8) 00:19:03.515 12300.603 - 12351.015: 99.0422% ( 8) 00:19:03.515 12351.015 
- 12401.428: 99.0743% ( 6) 00:19:03.515 12401.428 - 12451.840: 99.0903% ( 3) 00:19:03.515 12451.840 - 12502.252: 99.0957% ( 1) 00:19:03.515 12502.252 - 12552.665: 99.1171% ( 4) 00:19:03.515 12552.665 - 12603.077: 99.1385% ( 4) 00:19:03.515 12603.077 - 12653.489: 99.1706% ( 6) 00:19:03.515 12653.489 - 12703.902: 99.1973% ( 5) 00:19:03.515 12703.902 - 12754.314: 99.2188% ( 4) 00:19:03.515 12754.314 - 12804.726: 99.2295% ( 2) 00:19:03.515 12804.726 - 12855.138: 99.2402% ( 2) 00:19:03.515 12855.138 - 12905.551: 99.2509% ( 2) 00:19:03.515 12905.551 - 13006.375: 99.2723% ( 4) 00:19:03.515 13006.375 - 13107.200: 99.2830% ( 2) 00:19:03.515 13107.200 - 13208.025: 99.3044% ( 4) 00:19:03.515 13208.025 - 13308.849: 99.3151% ( 2) 00:19:03.515 24903.680 - 25004.505: 99.3258% ( 2) 00:19:03.515 25004.505 - 25105.329: 99.3418% ( 3) 00:19:03.515 25105.329 - 25206.154: 99.3632% ( 4) 00:19:03.515 25206.154 - 25306.978: 99.3846% ( 4) 00:19:03.515 25306.978 - 25407.803: 99.4060% ( 4) 00:19:03.515 25407.803 - 25508.628: 99.4328% ( 5) 00:19:03.515 25508.628 - 25609.452: 99.4542% ( 4) 00:19:03.515 25609.452 - 25710.277: 99.4756% ( 4) 00:19:03.515 25710.277 - 25811.102: 99.4970% ( 4) 00:19:03.515 25811.102 - 26012.751: 99.5452% ( 9) 00:19:03.515 26012.751 - 26214.400: 99.5880% ( 8) 00:19:03.515 26214.400 - 26416.049: 99.6308% ( 8) 00:19:03.515 26416.049 - 26617.698: 99.6575% ( 5) 00:19:03.515 29239.138 - 29440.788: 99.6843% ( 5) 00:19:03.515 29440.788 - 29642.437: 99.7324% ( 9) 00:19:03.515 29642.437 - 29844.086: 99.7753% ( 8) 00:19:03.515 29844.086 - 30045.735: 99.8181% ( 8) 00:19:03.515 30045.735 - 30247.385: 99.8662% ( 9) 00:19:03.515 30247.385 - 30449.034: 99.9144% ( 9) 00:19:03.515 30449.034 - 30650.683: 99.9572% ( 8) 00:19:03.515 30650.683 - 30852.332: 100.0000% ( 8) 00:19:03.515 00:19:03.515 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:03.515 ============================================================================== 00:19:03.515 Range in us Cumulative IO count 00:19:03.515 5570.560 - 5595.766: 0.0161% ( 3) 00:19:03.515 5595.766 - 5620.972: 0.0535% ( 7) 00:19:03.515 5620.972 - 5646.178: 0.1070% ( 10) 00:19:03.515 5646.178 - 5671.385: 0.2087% ( 19) 00:19:03.515 5671.385 - 5696.591: 0.3050% ( 18) 00:19:03.515 5696.591 - 5721.797: 0.4976% ( 36) 00:19:03.515 5721.797 - 5747.003: 0.7598% ( 49) 00:19:03.515 5747.003 - 5772.209: 1.1237% ( 68) 00:19:03.515 5772.209 - 5797.415: 1.6535% ( 99) 00:19:03.515 5797.415 - 5822.622: 2.3598% ( 132) 00:19:03.515 5822.622 - 5847.828: 3.4942% ( 212) 00:19:03.515 5847.828 - 5873.034: 4.9069% ( 264) 00:19:03.515 5873.034 - 5898.240: 6.4801% ( 294) 00:19:03.515 5898.240 - 5923.446: 8.3851% ( 356) 00:19:03.515 5923.446 - 5948.652: 10.1455% ( 329) 00:19:03.515 5948.652 - 5973.858: 11.9114% ( 330) 00:19:03.515 5973.858 - 5999.065: 13.9073% ( 373) 00:19:03.515 5999.065 - 6024.271: 15.9461% ( 381) 00:19:03.515 6024.271 - 6049.477: 18.0437% ( 392) 00:19:03.515 6049.477 - 6074.683: 20.1894% ( 401) 00:19:03.515 6074.683 - 6099.889: 22.3084% ( 396) 00:19:03.515 6099.889 - 6125.095: 24.4114% ( 393) 00:19:03.515 6125.095 - 6150.302: 26.5946% ( 408) 00:19:03.515 6150.302 - 6175.508: 28.6869% ( 391) 00:19:03.515 6175.508 - 6200.714: 30.9503% ( 423) 00:19:03.515 6200.714 - 6225.920: 33.2085% ( 422) 00:19:03.515 6225.920 - 6251.126: 35.7074% ( 467) 00:19:03.515 6251.126 - 6276.332: 38.2331% ( 472) 00:19:03.515 6276.332 - 6301.538: 40.6250% ( 447) 00:19:03.515 6301.538 - 6326.745: 43.0169% ( 447) 00:19:03.515 6326.745 - 6351.951: 45.4409% ( 453) 00:19:03.515 6351.951 - 
[histogram rows (Range in us / Cumulative IO count) for the preceding namespace continue here, from 6377.157 us up to 29037.489 - 29239.138: 100.0000% ( 3)]
00:19:03.516 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:19:03.516 ==============================================================================
00:19:03.516        Range in us     Cumulative    IO count
[histogram rows from 5545.354 - 5570.560: 0.0054% ( 1) up to 27222.646 - 27424.295: 100.0000% ( 4)]
00:19:03.517 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:19:03.517 ==============================================================================
00:19:03.517        Range in us     Cumulative    IO count
[histogram rows from 5545.354 - 5570.560: 0.0107% ( 2) up to 22080.591 - 22181.415: 100.0000% ( 4)]
00:19:03.518
00:19:03.518 04:41:10 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:19:04.894 Initializing NVMe Controllers
00:19:04.894 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:04.894 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:19:04.894 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:19:04.894 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:19:04.894 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:04.895 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:19:04.895 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:19:04.895 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:19:04.895 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:19:04.895 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:19:04.895 Initialization complete. Launching workers.
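The trace line above launches the second perf pass: queue depth 128, 12288-byte (12 KiB) writes for 1 second, with detailed per-bucket latency tracking. For re-running it outside the CI harness, here is a minimal Python sketch of the same invocation. The binary path is the one used in this workspace, and the flag glosses reflect my reading of spdk_nvme_perf's usage text; verify both against your own SPDK checkout before relying on them.

import subprocess

# Path as used in this job's VM; adjust for your checkout.
PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"

args = [
    PERF,
    "-q", "128",    # outstanding I/Os (queue depth)
    "-w", "write",  # workload pattern
    "-o", "12288",  # I/O size in bytes (12 KiB)
    "-t", "1",      # run time in seconds
    "-LL",          # -L enables latency tracking; given twice, emits the detailed histograms seen above
    "-i", "0",      # shared memory group ID
]

# Capture stdout so the histograms and summary tables can be post-processed.
result = subprocess.run(args, check=True, capture_output=True, text=True)
print(result.stdout)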
00:19:04.895 ========================================================
00:19:04.895                                                                            Latency(us)
00:19:04.895 Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:04.895 PCIE (0000:00:10.0) NSID 1 from core 0:   17347.20     203.29    7390.42    6107.38   28540.62
00:19:04.895 PCIE (0000:00:11.0) NSID 1 from core 0:   17347.20     203.29    7382.38    6203.64   27359.32
00:19:04.895 PCIE (0000:00:13.0) NSID 1 from core 0:   17347.20     203.29    7374.13    6213.88   26354.11
00:19:04.895 PCIE (0000:00:12.0) NSID 1 from core 0:   17347.20     203.29    7365.74    6097.91   25302.75
00:19:04.895 PCIE (0000:00:12.0) NSID 2 from core 0:   17347.20     203.29    7357.39    6066.71   23704.56
00:19:04.895 PCIE (0000:00:12.0) NSID 3 from core 0:   17347.20     203.29    7348.74    6130.47   22147.33
00:19:04.895 ========================================================
00:19:04.895 Total                                  :  104083.19    1219.72    7369.80    6066.71   28540.62
00:19:04.895
00:19:04.895 Summary latency data from core 0, all values in us
00:19:04.895 (columns: PCIE 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:12.0 NSID 1/2/3):
=================================================================================
Percentile       10.0/1       11.0/1       13.0/1       12.0/1       12.0/2       12.0/3
  1.00000% :    6452.775     6553.600     6553.600     6553.600     6553.600     6553.600
 10.00000% :    6704.837     6805.662     6805.662     6805.662     6805.662     6805.662
 25.00000% :    6906.486     6956.898     6956.898     6956.898     6956.898     6956.898
 50.00000% :    7158.548     7108.135     7108.135     7108.135     7108.135     7108.135
 75.00000% :    7511.434     7410.609     7461.022     7461.022     7461.022     7410.609
 90.00000% :    8065.969     8015.557     7965.145     7965.145     7914.732     8015.557
 95.00000% :    8620.505     8570.092     8670.917     8620.505     8570.092     8519.680
 98.00000% :    9830.400     9578.338     9578.338     9830.400     9931.225     9880.812
 99.00000% :   10687.409    10737.822    10788.234    10737.822    10636.997    10636.997
 99.50000% :   22282.240    21273.994    20769.871    19156.677    18753.378    17644.308
 99.90000% :   28230.892    27020.997    26012.751    24903.680    23290.486    21173.169
 99.99000% :   28634.191    27424.295    26416.049    25306.978    23693.785    21374.818
 99.99900% :   28634.191    27424.295    26416.049    25306.978    23794.609    22181.415
 99.99990% :   28634.191    27424.295    26416.049    25306.978    23794.609    22181.415
 99.99999% :   28634.191    27424.295    26416.049    25306.978    23794.609    22181.415
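Two quick consistency checks on the tables above. Throughput: 17347.20 IOPS x 12288 bytes = 17347.20 * 12288 / 1048576 = 203.29 MiB/s, matching the MiB/s column. Percentiles: each summary value is the upper bound of the first histogram bucket whose cumulative percentage reaches that percentile; for PCIE (0000:00:10.0) NSID 1 the cumulative count crosses 50% inside the 7108.135 - 7158.548 bucket (49.5347% -> 53.8948%), which is exactly the reported 50.00000% : 7158.548us. Below is a minimal sketch of that read-off, assuming the rows are already parsed into (low_us, high_us, cumulative_pct) tuples; it illustrates the relationship visible in this log, not SPDK's internal computation.

def percentile_from_cumulative(buckets, p):
    """Return the bucket upper bound (us) at which cumulative coverage
    first reaches percentile p. `buckets` is an ascending list of
    (low_us, high_us, cumulative_pct) tuples as printed in the histograms."""
    for low, high, cum_pct in buckets:
        if cum_pct >= p:
            return high
    return buckets[-1][1]  # p beyond recorded coverage: clamp to the last bucket

# Worked check against two rows from this run's PCIE (0000:00:10.0) NSID 1 histogram:
rows = [(7057.723, 7108.135, 49.5347), (7108.135, 7158.548, 53.8948)]
assert percentile_from_cumulative(rows, 50.0) == 7158.548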
00:19:04.895 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:19:04.895 ==============================================================================
00:19:04.895        Range in us     Cumulative    IO count
[histogram rows from 6099.889 - 6125.095: 0.0115% ( 2) up to 28432.542 - 28634.191: 100.0000% ( 4)]
00:19:04.896 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:19:04.896 ==============================================================================
00:19:04.896        Range in us     Cumulative    IO count
[histogram rows from 6200.714 - 6225.920: 0.0057% ( 1) up to 27222.646 - 27424.295: 100.0000% ( 6)]
00:19:04.896 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:19:04.896 ==============================================================================
00:19:04.896        Range in us     Cumulative    IO count
[histogram rows from 6200.714 - 6225.920: 0.0057% ( 1) up to 26214.400 - 26416.049: 100.0000% ( 5)]
00:19:04.897 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:19:04.897 ==============================================================================
00:19:04.897        Range in us     Cumulative    IO count
[histogram rows from 6074.683 - 6099.889: 0.0057% ( 1) up to 25206.154 - 25306.978: 100.0000% ( 5)]
00:19:04.898 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:19:04.898 ==============================================================================
00:19:04.898        Range in us     Cumulative    IO count
[histogram rows from 6049.477 - 6074.683: 0.0057% ( 1) up to 23693.785 - 23794.609: 100.0000% ( 1)]
00:19:04.899 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:19:04.899 ==============================================================================
00:19:04.899        Range in us     Cumulative    IO count
[histogram rows from 6125.095 - 6150.302: 0.0057% ( 1) through 11292.357 - 11342.769: 99.2647% ( 1), where the captured output breaks off]
00:19:04.900 16636.062 - 16736.886: 99.2705% ( 1) 00:19:04.900 17241.009 - 17341.834: 99.2877% ( 3) 00:19:04.900 17341.834 - 17442.658: 99.3451% ( 10) 00:19:04.900 17442.658 - 17543.483: 99.4715% ( 22) 00:19:04.900 17543.483 - 17644.308: 99.5290% ( 10) 00:19:04.900 17644.308 - 17745.132: 99.5519% ( 4) 00:19:04.900 17745.132 - 17845.957: 99.5749% ( 4) 00:19:04.900 17845.957 - 17946.782: 99.5979% ( 4) 00:19:04.900 17946.782 - 18047.606: 99.6209% ( 4) 00:19:04.900 18047.606 - 18148.431: 99.6324% ( 2) 00:19:04.900 20870.695 - 20971.520: 99.6611% ( 5) 00:19:04.900 20971.520 - 21072.345: 99.8047% ( 25) 00:19:04.900 21072.345 - 21173.169: 99.9081% ( 18) 00:19:04.900 21173.169 - 21273.994: 99.9713% ( 11) 00:19:04.900 21273.994 - 21374.818: 99.9943% ( 4) 00:19:04.900 22080.591 - 22181.415: 100.0000% ( 1) 00:19:04.900 00:19:04.900 04:41:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:04.900 00:19:04.900 real 0m2.507s 00:19:04.900 user 0m2.205s 00:19:04.900 sys 0m0.195s 00:19:04.900 04:41:11 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.900 04:41:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:04.900 ************************************ 00:19:04.900 END TEST nvme_perf 00:19:04.900 ************************************ 00:19:04.900 04:41:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:04.900 04:41:11 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:04.900 04:41:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.900 04:41:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.900 ************************************ 00:19:04.900 START TEST nvme_hello_world 00:19:04.900 ************************************ 00:19:04.900 04:41:11 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:04.900 Initializing NVMe Controllers 00:19:04.900 Attached to 0000:00:10.0 00:19:04.900 Namespace ID: 1 size: 6GB 00:19:04.900 Attached to 0000:00:11.0 00:19:04.900 Namespace ID: 1 size: 5GB 00:19:04.900 Attached to 0000:00:13.0 00:19:04.900 Namespace ID: 1 size: 1GB 00:19:04.900 Attached to 0000:00:12.0 00:19:04.900 Namespace ID: 1 size: 4GB 00:19:04.900 Namespace ID: 2 size: 4GB 00:19:04.900 Namespace ID: 3 size: 4GB 00:19:04.900 Initialization complete. 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 00:19:04.900 INFO: using host memory buffer for IO 00:19:04.900 Hello world! 
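The cumulative latency histograms above (one per controller and namespace) all share the same shape: each bucket line reads "<low> - <high>: <cumulative %> ( <count>)", with the range in microseconds. A short awk pass can pull the first bucket at or past a target percentile out of a saved copy of this console output. Here build.log is a hypothetical saved copy, and the field positions assume each bucket sits on its own timestamped line, as in the raw console stream:

    # print the first bucket whose cumulative percentage reaches 99%
    # (fires once per histogram; works for both the "Cumulative IO count"
    # tables above and the "Cumulative Count" tables later in this run)
    awk '/Range in us/ { inhist = 1; next }
         inhist && $3 == "-" {
             pct = $5; sub(/%/, "", pct)
             if (pct + 0 >= 99) { print $2, "-", $4, $5; inhist = 0 }
         }' build.log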
00:19:04.900 00:19:04.900 real 0m0.228s 00:19:04.900 user 0m0.098s 00:19:04.900 sys 0m0.086s 00:19:04.900 ************************************ 00:19:04.900 END TEST nvme_hello_world 00:19:04.900 ************************************ 00:19:04.900 04:41:11 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.900 04:41:11 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:04.900 04:41:12 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:04.900 04:41:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.900 04:41:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.900 04:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.900 ************************************ 00:19:04.900 START TEST nvme_sgl 00:19:04.900 ************************************ 00:19:04.900 04:41:12 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:05.159 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:05.159 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:05.159 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:05.159 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:05.159 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:05.159 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:19:05.159 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:19:05.159 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
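The hello_world pass that closed above can be reproduced outside the harness; the binary path and arguments are copied verbatim from the trace. The one assumption here is that -i 0 is the shared-memory group id (it is passed identically to every test binary in this run), and root is needed because the examples drive the PCIe controllers directly:

    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0

Each of the six "Hello world!" lines corresponds to one attached namespace: one each on 0000:00:10.0, 0000:00:11.0 and 0000:00:13.0, plus three on 0000:00:12.0.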
00:19:05.159 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:19:05.159 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:19:05.159 NVMe Readv/Writev Request test 00:19:05.159 Attached to 0000:00:10.0 00:19:05.159 Attached to 0000:00:11.0 00:19:05.159 Attached to 0000:00:13.0 00:19:05.159 Attached to 0000:00:12.0 00:19:05.159 0000:00:10.0: build_io_request_2 test passed 00:19:05.159 0000:00:10.0: build_io_request_4 test passed 00:19:05.159 0000:00:10.0: build_io_request_5 test passed 00:19:05.159 0000:00:10.0: build_io_request_6 test passed 00:19:05.159 0000:00:10.0: build_io_request_7 test passed 00:19:05.159 0000:00:10.0: build_io_request_10 test passed 00:19:05.159 0000:00:11.0: build_io_request_2 test passed 00:19:05.159 0000:00:11.0: build_io_request_4 test passed 00:19:05.159 0000:00:11.0: build_io_request_5 test passed 00:19:05.159 0000:00:11.0: build_io_request_6 test passed 00:19:05.159 0000:00:11.0: build_io_request_7 test passed 00:19:05.159 0000:00:11.0: build_io_request_10 test passed 00:19:05.159 Cleaning up... 00:19:05.159 ************************************ 00:19:05.159 END TEST nvme_sgl 00:19:05.159 ************************************ 00:19:05.159 00:19:05.159 real 0m0.266s 00:19:05.159 user 0m0.138s 00:19:05.159 sys 0m0.078s 00:19:05.159 04:41:12 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.159 04:41:12 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:05.159 04:41:12 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:05.159 04:41:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.159 04:41:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.159 04:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.159 ************************************ 00:19:05.159 START TEST nvme_e2edp 00:19:05.159 ************************************ 00:19:05.159 04:41:12 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:05.417 NVMe Write/Read with End-to-End data protection test 00:19:05.417 Attached to 0000:00:10.0 00:19:05.417 Attached to 0000:00:11.0 00:19:05.417 Attached to 0000:00:13.0 00:19:05.417 Attached to 0000:00:12.0 00:19:05.417 Cleaning up... 
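Every build_io_request_N case in the SGL test either logs "Invalid IO length parameter" or "test passed", so a saved copy of this output (build.log again, hypothetical) can be tallied in one pipeline; field 2 is the PCIe address because each line carries a leading timestamp:

    grep -c 'test passed' build.log                  # 12 in the excerpt above
    grep -c 'Invalid IO length parameter' build.log  # 36 in the excerpt above
    grep 'build_io_request' build.log | awk '{ print $2 }' | sort | uniq -c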
00:19:05.417 00:19:05.417 real 0m0.196s 00:19:05.417 user 0m0.065s 00:19:05.417 sys 0m0.088s 00:19:05.417 ************************************ 00:19:05.417 END TEST nvme_e2edp 00:19:05.417 ************************************ 00:19:05.417 04:41:12 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.417 04:41:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:05.417 04:41:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:05.417 04:41:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.417 04:41:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.417 04:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.417 ************************************ 00:19:05.417 START TEST nvme_reserve 00:19:05.417 ************************************ 00:19:05.417 04:41:12 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:05.676 ===================================================== 00:19:05.676 NVMe Controller at PCI bus 0, device 16, function 0 00:19:05.676 ===================================================== 00:19:05.676 Reservations: Not Supported 00:19:05.676 ===================================================== 00:19:05.676 NVMe Controller at PCI bus 0, device 17, function 0 00:19:05.676 ===================================================== 00:19:05.676 Reservations: Not Supported 00:19:05.676 ===================================================== 00:19:05.676 NVMe Controller at PCI bus 0, device 19, function 0 00:19:05.676 ===================================================== 00:19:05.676 Reservations: Not Supported 00:19:05.676 ===================================================== 00:19:05.676 NVMe Controller at PCI bus 0, device 18, function 0 00:19:05.676 ===================================================== 00:19:05.676 Reservations: Not Supported 00:19:05.676 Reservation test passed 00:19:05.676 00:19:05.676 real 0m0.220s 00:19:05.676 user 0m0.075s 00:19:05.676 sys 0m0.102s 00:19:05.676 ************************************ 00:19:05.676 END TEST nvme_reserve 00:19:05.676 ************************************ 00:19:05.676 04:41:12 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.676 04:41:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:05.676 04:41:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:05.676 04:41:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.676 04:41:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.676 04:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.676 ************************************ 00:19:05.676 START TEST nvme_err_injection 00:19:05.676 ************************************ 00:19:05.676 04:41:12 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:05.934 NVMe Error Injection test 00:19:05.934 Attached to 0000:00:10.0 00:19:05.934 Attached to 0000:00:11.0 00:19:05.934 Attached to 0000:00:13.0 00:19:05.934 Attached to 0000:00:12.0 00:19:05.934 0000:00:10.0: get features failed as expected 00:19:05.934 0000:00:11.0: get features failed as expected 00:19:05.934 0000:00:13.0: get features failed as expected 00:19:05.934 0000:00:12.0: get features failed as expected 00:19:05.934 
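All four QEMU controllers in the reservation test above report "Reservations: Not Supported", so the test passes without exercising any reservation commands. The banner layout is regular enough to verify mechanically, assuming the capability line sits two lines below each controller banner as it does in this excerpt (build.log stands in for a saved copy):

    grep -A2 'NVMe Controller at PCI bus' build.log | grep -c 'Not Supported'   # expect 4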
0000:00:10.0: get features successfully as expected 00:19:05.934 0000:00:11.0: get features successfully as expected 00:19:05.934 0000:00:13.0: get features successfully as expected 00:19:05.934 0000:00:12.0: get features successfully as expected 00:19:05.934 0000:00:10.0: read failed as expected 00:19:05.934 0000:00:11.0: read failed as expected 00:19:05.934 0000:00:13.0: read failed as expected 00:19:05.934 0000:00:12.0: read failed as expected 00:19:05.934 0000:00:10.0: read successfully as expected 00:19:05.934 0000:00:11.0: read successfully as expected 00:19:05.934 0000:00:13.0: read successfully as expected 00:19:05.934 0000:00:12.0: read successfully as expected 00:19:05.934 Cleaning up... 00:19:05.934 00:19:05.934 real 0m0.218s 00:19:05.934 user 0m0.079s 00:19:05.934 sys 0m0.098s 00:19:05.934 ************************************ 00:19:05.934 END TEST nvme_err_injection 00:19:05.934 ************************************ 00:19:05.934 04:41:13 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.934 04:41:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:05.934 04:41:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:05.934 04:41:13 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:19:05.934 04:41:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.934 04:41:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.934 ************************************ 00:19:05.934 START TEST nvme_overhead 00:19:05.934 ************************************ 00:19:05.934 04:41:13 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:07.308 Initializing NVMe Controllers 00:19:07.308 Attached to 0000:00:10.0 00:19:07.308 Attached to 0000:00:11.0 00:19:07.308 Attached to 0000:00:13.0 00:19:07.308 Attached to 0000:00:12.0 00:19:07.308 Initialization complete. Launching workers. 
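The overhead invocation above carries its whole configuration on the command line. The flag meanings given in the comments follow the usual SPDK single-letter conventions and are an assumption worth confirming against the tool's --help on the tree under test:

    #   -o 4096   4 KiB I/O size
    #   -t 1      run for one second
    #   -H        print the submit/complete latency histograms that follow
    #   -i 0      shared memory group id, common to the whole suite
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0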
00:19:07.308 submit (in ns) avg, min, max = 11652.0, 9873.1, 891933.1 00:19:07.308 complete (in ns) avg, min, max = 7982.4, 7230.8, 88801.5 00:19:07.308 00:19:07.308 Submit histogram 00:19:07.308 ================ 00:19:07.308 Range in us Cumulative Count 00:19:07.308 9.846 - 9.895: 0.0067% ( 1) 00:19:07.308 10.437 - 10.486: 0.0135% ( 1) 00:19:07.308 10.732 - 10.782: 0.0202% ( 1) 00:19:07.308 10.782 - 10.831: 0.1146% ( 14) 00:19:07.308 10.831 - 10.880: 0.3168% ( 30) 00:19:07.308 10.880 - 10.929: 1.1053% ( 117) 00:19:07.308 10.929 - 10.978: 3.2352% ( 316) 00:19:07.308 10.978 - 11.028: 7.3330% ( 608) 00:19:07.308 11.028 - 11.077: 13.8303% ( 964) 00:19:07.308 11.077 - 11.126: 20.8196% ( 1037) 00:19:07.308 11.126 - 11.175: 29.3590% ( 1267) 00:19:07.308 11.175 - 11.225: 39.0038% ( 1431) 00:19:07.308 11.225 - 11.274: 49.7675% ( 1597) 00:19:07.308 11.274 - 11.323: 59.4595% ( 1438) 00:19:07.308 11.323 - 11.372: 67.1093% ( 1135) 00:19:07.308 11.372 - 11.422: 71.5711% ( 662) 00:19:07.308 11.422 - 11.471: 74.5703% ( 445) 00:19:07.308 11.471 - 11.520: 76.8282% ( 335) 00:19:07.308 11.520 - 11.569: 78.7154% ( 280) 00:19:07.308 11.569 - 11.618: 79.9690% ( 186) 00:19:07.308 11.618 - 11.668: 81.1350% ( 173) 00:19:07.308 11.668 - 11.717: 82.2740% ( 169) 00:19:07.308 11.717 - 11.766: 83.3053% ( 153) 00:19:07.308 11.766 - 11.815: 84.3567% ( 156) 00:19:07.308 11.815 - 11.865: 85.4688% ( 165) 00:19:07.308 11.865 - 11.914: 86.6887% ( 181) 00:19:07.308 11.914 - 11.963: 87.7940% ( 164) 00:19:07.308 11.963 - 12.012: 88.7309% ( 139) 00:19:07.308 12.012 - 12.062: 89.5936% ( 128) 00:19:07.308 12.062 - 12.111: 90.4428% ( 126) 00:19:07.308 12.111 - 12.160: 91.3797% ( 139) 00:19:07.308 12.160 - 12.209: 92.1682% ( 117) 00:19:07.308 12.209 - 12.258: 92.8220% ( 97) 00:19:07.308 12.258 - 12.308: 93.4623% ( 95) 00:19:07.308 12.308 - 12.357: 93.8869% ( 63) 00:19:07.308 12.357 - 12.406: 94.3048% ( 62) 00:19:07.308 12.406 - 12.455: 94.5946% ( 43) 00:19:07.308 12.455 - 12.505: 94.7159% ( 18) 00:19:07.308 12.505 - 12.554: 94.8103% ( 14) 00:19:07.308 12.554 - 12.603: 94.9585% ( 22) 00:19:07.308 12.603 - 12.702: 95.0462% ( 13) 00:19:07.308 12.702 - 12.800: 95.1810% ( 20) 00:19:07.308 12.800 - 12.898: 95.2686% ( 13) 00:19:07.308 12.898 - 12.997: 95.3495% ( 12) 00:19:07.308 12.997 - 13.095: 95.4169% ( 10) 00:19:07.308 13.095 - 13.194: 95.4708% ( 8) 00:19:07.308 13.194 - 13.292: 95.5382% ( 10) 00:19:07.308 13.292 - 13.391: 95.6056% ( 10) 00:19:07.308 13.391 - 13.489: 95.6797% ( 11) 00:19:07.308 13.489 - 13.588: 95.7471% ( 10) 00:19:07.308 13.588 - 13.686: 95.8078% ( 9) 00:19:07.308 13.686 - 13.785: 95.9021% ( 14) 00:19:07.308 13.785 - 13.883: 95.9561% ( 8) 00:19:07.308 13.883 - 13.982: 96.0909% ( 20) 00:19:07.308 13.982 - 14.080: 96.1583% ( 10) 00:19:07.308 14.080 - 14.178: 96.2189% ( 9) 00:19:07.308 14.178 - 14.277: 96.3268% ( 16) 00:19:07.308 14.277 - 14.375: 96.3874% ( 9) 00:19:07.308 14.375 - 14.474: 96.4683% ( 12) 00:19:07.308 14.474 - 14.572: 96.5424% ( 11) 00:19:07.308 14.572 - 14.671: 96.6300% ( 13) 00:19:07.308 14.671 - 14.769: 96.8592% ( 34) 00:19:07.308 14.769 - 14.868: 97.0277% ( 25) 00:19:07.308 14.868 - 14.966: 97.1490% ( 18) 00:19:07.308 14.966 - 15.065: 97.2029% ( 8) 00:19:07.308 15.065 - 15.163: 97.2771% ( 11) 00:19:07.308 15.163 - 15.262: 97.4388% ( 24) 00:19:07.308 15.262 - 15.360: 97.5669% ( 19) 00:19:07.308 15.360 - 15.458: 97.6478% ( 12) 00:19:07.308 15.458 - 15.557: 97.7758% ( 19) 00:19:07.308 15.557 - 15.655: 97.8702% ( 14) 00:19:07.308 15.655 - 15.754: 97.9241% ( 8) 00:19:07.308 15.754 - 15.852: 98.0387% ( 17) 
00:19:07.308 15.852 - 15.951: 98.1061% ( 10) 00:19:07.308 15.951 - 16.049: 98.1667% ( 9) 00:19:07.308 16.049 - 16.148: 98.2072% ( 6) 00:19:07.308 16.148 - 16.246: 98.2544% ( 7) 00:19:07.308 16.246 - 16.345: 98.2611% ( 1) 00:19:07.308 16.345 - 16.443: 98.3218% ( 9) 00:19:07.308 16.443 - 16.542: 98.3689% ( 7) 00:19:07.308 16.542 - 16.640: 98.4161% ( 7) 00:19:07.308 16.640 - 16.738: 98.4431% ( 4) 00:19:07.308 16.738 - 16.837: 98.4835% ( 6) 00:19:07.308 16.837 - 16.935: 98.5779% ( 14) 00:19:07.308 16.935 - 17.034: 98.6722% ( 14) 00:19:07.308 17.034 - 17.132: 98.7666% ( 14) 00:19:07.308 17.132 - 17.231: 98.8205% ( 8) 00:19:07.308 17.231 - 17.329: 98.8947% ( 11) 00:19:07.308 17.329 - 17.428: 98.9621% ( 10) 00:19:07.308 17.428 - 17.526: 99.0160% ( 8) 00:19:07.308 17.526 - 17.625: 99.0564% ( 6) 00:19:07.308 17.625 - 17.723: 99.0901% ( 5) 00:19:07.308 17.723 - 17.822: 99.1440% ( 8) 00:19:07.308 17.822 - 17.920: 99.2182% ( 11) 00:19:07.308 17.920 - 18.018: 99.2586% ( 6) 00:19:07.308 18.018 - 18.117: 99.3058% ( 7) 00:19:07.308 18.117 - 18.215: 99.3395% ( 5) 00:19:07.308 18.215 - 18.314: 99.3934% ( 8) 00:19:07.308 18.314 - 18.412: 99.4541% ( 9) 00:19:07.308 18.412 - 18.511: 99.4810% ( 4) 00:19:07.308 18.511 - 18.609: 99.4945% ( 2) 00:19:07.308 18.609 - 18.708: 99.5080% ( 2) 00:19:07.308 18.708 - 18.806: 99.5215% ( 2) 00:19:07.308 18.806 - 18.905: 99.5349% ( 2) 00:19:07.308 18.905 - 19.003: 99.5754% ( 6) 00:19:07.308 19.003 - 19.102: 99.5889% ( 2) 00:19:07.308 19.102 - 19.200: 99.6023% ( 2) 00:19:07.309 19.200 - 19.298: 99.6226% ( 3) 00:19:07.309 19.298 - 19.397: 99.6293% ( 1) 00:19:07.309 19.397 - 19.495: 99.6428% ( 2) 00:19:07.309 19.495 - 19.594: 99.6495% ( 1) 00:19:07.309 19.594 - 19.692: 99.6630% ( 2) 00:19:07.309 19.791 - 19.889: 99.6697% ( 1) 00:19:07.309 19.988 - 20.086: 99.6765% ( 1) 00:19:07.309 20.185 - 20.283: 99.6967% ( 3) 00:19:07.309 20.283 - 20.382: 99.7034% ( 1) 00:19:07.309 20.382 - 20.480: 99.7102% ( 1) 00:19:07.309 20.578 - 20.677: 99.7169% ( 1) 00:19:07.309 20.775 - 20.874: 99.7237% ( 1) 00:19:07.309 21.169 - 21.268: 99.7304% ( 1) 00:19:07.309 21.366 - 21.465: 99.7439% ( 2) 00:19:07.309 21.465 - 21.563: 99.7506% ( 1) 00:19:07.309 21.858 - 21.957: 99.7641% ( 2) 00:19:07.309 22.055 - 22.154: 99.7708% ( 1) 00:19:07.309 22.252 - 22.351: 99.7776% ( 1) 00:19:07.309 22.449 - 22.548: 99.7843% ( 1) 00:19:07.309 22.548 - 22.646: 99.7911% ( 1) 00:19:07.309 22.843 - 22.942: 99.7978% ( 1) 00:19:07.309 22.942 - 23.040: 99.8045% ( 1) 00:19:07.309 23.040 - 23.138: 99.8113% ( 1) 00:19:07.309 23.237 - 23.335: 99.8180% ( 1) 00:19:07.309 23.828 - 23.926: 99.8248% ( 1) 00:19:07.309 23.926 - 24.025: 99.8315% ( 1) 00:19:07.309 24.025 - 24.123: 99.8382% ( 1) 00:19:07.309 24.123 - 24.222: 99.8450% ( 1) 00:19:07.309 24.517 - 24.615: 99.8585% ( 2) 00:19:07.309 24.615 - 24.714: 99.8652% ( 1) 00:19:07.309 24.812 - 24.911: 99.8719% ( 1) 00:19:07.309 24.911 - 25.009: 99.8787% ( 1) 00:19:07.309 25.009 - 25.108: 99.8854% ( 1) 00:19:07.309 25.206 - 25.403: 99.9056% ( 3) 00:19:07.309 25.403 - 25.600: 99.9124% ( 1) 00:19:07.309 28.357 - 28.554: 99.9191% ( 1) 00:19:07.309 28.751 - 28.948: 99.9259% ( 1) 00:19:07.309 30.326 - 30.523: 99.9326% ( 1) 00:19:07.309 31.508 - 31.705: 99.9393% ( 1) 00:19:07.309 36.037 - 36.234: 99.9461% ( 1) 00:19:07.309 38.991 - 39.188: 99.9528% ( 1) 00:19:07.309 40.369 - 40.566: 99.9663% ( 2) 00:19:07.309 46.080 - 46.277: 99.9730% ( 1) 00:19:07.309 46.277 - 46.474: 99.9798% ( 1) 00:19:07.309 78.769 - 79.163: 99.9865% ( 1) 00:19:07.309 92.554 - 92.948: 99.9933% ( 1) 00:19:07.309 888.517 - 
894.818: 100.0000% ( 1) 00:19:07.309 00:19:07.309 Complete histogram 00:19:07.309 ================== 00:19:07.309 Range in us Cumulative Count 00:19:07.309 7.188 - 7.237: 0.0202% ( 3) 00:19:07.309 7.237 - 7.286: 0.4920% ( 70) 00:19:07.309 7.286 - 7.335: 4.4955% ( 594) 00:19:07.309 7.335 - 7.385: 15.2052% ( 1589) 00:19:07.309 7.385 - 7.434: 26.8653% ( 1730) 00:19:07.309 7.434 - 7.483: 35.1554% ( 1230) 00:19:07.309 7.483 - 7.532: 40.4260% ( 782) 00:19:07.309 7.532 - 7.582: 43.4791% ( 453) 00:19:07.309 7.582 - 7.631: 45.4944% ( 299) 00:19:07.309 7.631 - 7.680: 46.6806% ( 176) 00:19:07.309 7.680 - 7.729: 47.2333% ( 82) 00:19:07.309 7.729 - 7.778: 47.6983% ( 69) 00:19:07.309 7.778 - 7.828: 47.9949% ( 44) 00:19:07.309 7.828 - 7.877: 48.4936% ( 74) 00:19:07.309 7.877 - 7.926: 51.4188% ( 434) 00:19:07.309 7.926 - 7.975: 58.6035% ( 1066) 00:19:07.309 7.975 - 8.025: 66.4555% ( 1165) 00:19:07.309 8.025 - 8.074: 72.6562% ( 920) 00:19:07.309 8.074 - 8.123: 79.0861% ( 954) 00:19:07.309 8.123 - 8.172: 84.3432% ( 780) 00:19:07.309 8.172 - 8.222: 87.9625% ( 537) 00:19:07.309 8.222 - 8.271: 90.3754% ( 358) 00:19:07.309 8.271 - 8.320: 92.1345% ( 261) 00:19:07.309 8.320 - 8.369: 93.4084% ( 189) 00:19:07.309 8.369 - 8.418: 94.1430% ( 109) 00:19:07.309 8.418 - 8.468: 94.6822% ( 80) 00:19:07.309 8.468 - 8.517: 95.0125% ( 49) 00:19:07.309 8.517 - 8.566: 95.2214% ( 31) 00:19:07.309 8.566 - 8.615: 95.4303% ( 31) 00:19:07.309 8.615 - 8.665: 95.5921% ( 24) 00:19:07.309 8.665 - 8.714: 95.7336% ( 21) 00:19:07.309 8.714 - 8.763: 95.8684% ( 20) 00:19:07.309 8.763 - 8.812: 95.9493% ( 12) 00:19:07.309 8.812 - 8.862: 95.9965% ( 7) 00:19:07.309 8.862 - 8.911: 96.0572% ( 9) 00:19:07.309 8.911 - 8.960: 96.1178% ( 9) 00:19:07.309 8.960 - 9.009: 96.1852% ( 10) 00:19:07.309 9.009 - 9.058: 96.2189% ( 5) 00:19:07.309 9.058 - 9.108: 96.2594% ( 6) 00:19:07.309 9.108 - 9.157: 96.3133% ( 8) 00:19:07.309 9.157 - 9.206: 96.3470% ( 5) 00:19:07.309 9.206 - 9.255: 96.3672% ( 3) 00:19:07.309 9.255 - 9.305: 96.4009% ( 5) 00:19:07.309 9.305 - 9.354: 96.4211% ( 3) 00:19:07.309 9.354 - 9.403: 96.4548% ( 5) 00:19:07.309 9.403 - 9.452: 96.4818% ( 4) 00:19:07.309 9.452 - 9.502: 96.4885% ( 1) 00:19:07.309 9.502 - 9.551: 96.5222% ( 5) 00:19:07.309 9.551 - 9.600: 96.5289% ( 1) 00:19:07.309 9.600 - 9.649: 96.5424% ( 2) 00:19:07.309 9.649 - 9.698: 96.5626% ( 3) 00:19:07.309 9.698 - 9.748: 96.5896% ( 4) 00:19:07.309 9.748 - 9.797: 96.6098% ( 3) 00:19:07.309 9.797 - 9.846: 96.6300% ( 3) 00:19:07.309 9.846 - 9.895: 96.6570% ( 4) 00:19:07.309 9.895 - 9.945: 96.6772% ( 3) 00:19:07.309 9.945 - 9.994: 96.7244% ( 7) 00:19:07.309 9.994 - 10.043: 96.7379% ( 2) 00:19:07.309 10.043 - 10.092: 96.7985% ( 9) 00:19:07.309 10.092 - 10.142: 96.8255% ( 4) 00:19:07.309 10.142 - 10.191: 96.8525% ( 4) 00:19:07.309 10.191 - 10.240: 96.9131% ( 9) 00:19:07.309 10.240 - 10.289: 96.9670% ( 8) 00:19:07.309 10.289 - 10.338: 97.0142% ( 7) 00:19:07.309 10.338 - 10.388: 97.0614% ( 7) 00:19:07.309 10.388 - 10.437: 97.1423% ( 12) 00:19:07.309 10.437 - 10.486: 97.1895% ( 7) 00:19:07.309 10.486 - 10.535: 97.2164% ( 4) 00:19:07.309 10.535 - 10.585: 97.2569% ( 6) 00:19:07.309 10.585 - 10.634: 97.2771% ( 3) 00:19:07.309 10.634 - 10.683: 97.3108% ( 5) 00:19:07.309 10.683 - 10.732: 97.3445% ( 5) 00:19:07.309 10.732 - 10.782: 97.3782% ( 5) 00:19:07.309 10.782 - 10.831: 97.4119% ( 5) 00:19:07.309 10.831 - 10.880: 97.4591% ( 7) 00:19:07.309 10.880 - 10.929: 97.5332% ( 11) 00:19:07.309 10.929 - 10.978: 97.6073% ( 11) 00:19:07.309 10.978 - 11.028: 97.6680% ( 9) 00:19:07.309 11.028 - 11.077: 
97.6950% ( 4) 00:19:07.309 11.077 - 11.126: 97.7354% ( 6) 00:19:07.309 11.126 - 11.175: 97.7893% ( 8) 00:19:07.309 11.175 - 11.225: 97.8095% ( 3) 00:19:07.309 11.225 - 11.274: 97.8500% ( 6) 00:19:07.309 11.274 - 11.323: 97.8837% ( 5) 00:19:07.309 11.323 - 11.372: 97.9241% ( 6) 00:19:07.309 11.372 - 11.422: 97.9308% ( 1) 00:19:07.309 11.422 - 11.471: 97.9376% ( 1) 00:19:07.309 11.471 - 11.520: 97.9443% ( 1) 00:19:07.309 11.520 - 11.569: 97.9645% ( 3) 00:19:07.309 11.618 - 11.668: 97.9713% ( 1) 00:19:07.309 11.668 - 11.717: 97.9848% ( 2) 00:19:07.309 11.717 - 11.766: 98.0319% ( 7) 00:19:07.309 11.766 - 11.815: 98.0454% ( 2) 00:19:07.309 11.815 - 11.865: 98.0656% ( 3) 00:19:07.309 11.865 - 11.914: 98.0859% ( 3) 00:19:07.309 11.914 - 11.963: 98.1128% ( 4) 00:19:07.309 11.963 - 12.012: 98.1330% ( 3) 00:19:07.309 12.012 - 12.062: 98.1465% ( 2) 00:19:07.309 12.062 - 12.111: 98.1667% ( 3) 00:19:07.309 12.111 - 12.160: 98.2004% ( 5) 00:19:07.309 12.160 - 12.209: 98.2139% ( 2) 00:19:07.309 12.209 - 12.258: 98.2409% ( 4) 00:19:07.309 12.258 - 12.308: 98.2476% ( 1) 00:19:07.309 12.308 - 12.357: 98.2611% ( 2) 00:19:07.309 12.357 - 12.406: 98.2746% ( 2) 00:19:07.309 12.406 - 12.455: 98.2881% ( 2) 00:19:07.309 12.455 - 12.505: 98.3015% ( 2) 00:19:07.309 12.554 - 12.603: 98.3218% ( 3) 00:19:07.309 12.603 - 12.702: 98.3420% ( 3) 00:19:07.309 12.702 - 12.800: 98.3689% ( 4) 00:19:07.309 12.800 - 12.898: 98.3824% ( 2) 00:19:07.309 12.898 - 12.997: 98.4431% ( 9) 00:19:07.309 12.997 - 13.095: 98.4633% ( 3) 00:19:07.309 13.095 - 13.194: 98.4835% ( 3) 00:19:07.309 13.194 - 13.292: 98.5105% ( 4) 00:19:07.309 13.292 - 13.391: 98.5509% ( 6) 00:19:07.309 13.391 - 13.489: 98.6116% ( 9) 00:19:07.309 13.489 - 13.588: 98.6722% ( 9) 00:19:07.309 13.588 - 13.686: 98.7531% ( 12) 00:19:07.309 13.686 - 13.785: 98.8003% ( 7) 00:19:07.309 13.785 - 13.883: 98.8340% ( 5) 00:19:07.309 13.883 - 13.982: 98.8744% ( 6) 00:19:07.309 13.982 - 14.080: 98.9351% ( 9) 00:19:07.309 14.080 - 14.178: 98.9823% ( 7) 00:19:07.309 14.178 - 14.277: 99.0295% ( 7) 00:19:07.309 14.277 - 14.375: 99.0969% ( 10) 00:19:07.309 14.375 - 14.474: 99.1912% ( 14) 00:19:07.309 14.474 - 14.572: 99.2519% ( 9) 00:19:07.309 14.572 - 14.671: 99.2856% ( 5) 00:19:07.309 14.671 - 14.769: 99.3395% ( 8) 00:19:07.309 14.769 - 14.868: 99.3732% ( 5) 00:19:07.309 14.868 - 14.966: 99.4204% ( 7) 00:19:07.309 14.966 - 15.065: 99.4743% ( 8) 00:19:07.309 15.065 - 15.163: 99.5215% ( 7) 00:19:07.309 15.163 - 15.262: 99.5484% ( 4) 00:19:07.309 15.262 - 15.360: 99.6023% ( 8) 00:19:07.309 15.360 - 15.458: 99.6293% ( 4) 00:19:07.310 15.458 - 15.557: 99.6495% ( 3) 00:19:07.310 15.557 - 15.655: 99.6563% ( 1) 00:19:07.310 15.655 - 15.754: 99.6630% ( 1) 00:19:07.310 15.754 - 15.852: 99.6697% ( 1) 00:19:07.310 15.852 - 15.951: 99.6832% ( 2) 00:19:07.310 15.951 - 16.049: 99.6900% ( 1) 00:19:07.310 16.049 - 16.148: 99.6967% ( 1) 00:19:07.310 16.246 - 16.345: 99.7034% ( 1) 00:19:07.310 16.345 - 16.443: 99.7102% ( 1) 00:19:07.310 16.443 - 16.542: 99.7169% ( 1) 00:19:07.310 16.640 - 16.738: 99.7237% ( 1) 00:19:07.310 16.935 - 17.034: 99.7371% ( 2) 00:19:07.310 17.231 - 17.329: 99.7439% ( 1) 00:19:07.310 17.723 - 17.822: 99.7641% ( 3) 00:19:07.310 17.920 - 18.018: 99.7708% ( 1) 00:19:07.310 18.314 - 18.412: 99.7843% ( 2) 00:19:07.310 19.102 - 19.200: 99.7978% ( 2) 00:19:07.310 19.495 - 19.594: 99.8045% ( 1) 00:19:07.310 19.594 - 19.692: 99.8113% ( 1) 00:19:07.310 19.988 - 20.086: 99.8180% ( 1) 00:19:07.310 20.086 - 20.185: 99.8248% ( 1) 00:19:07.310 20.185 - 20.283: 99.8315% ( 1) 00:19:07.310 
20.283 - 20.382: 99.8450% ( 2) 00:19:07.310 20.382 - 20.480: 99.8585% ( 2) 00:19:07.310 20.480 - 20.578: 99.8652% ( 1) 00:19:07.310 20.874 - 20.972: 99.8719% ( 1) 00:19:07.310 21.268 - 21.366: 99.8787% ( 1) 00:19:07.310 21.662 - 21.760: 99.8854% ( 1) 00:19:07.310 21.858 - 21.957: 99.8922% ( 1) 00:19:07.310 22.055 - 22.154: 99.8989% ( 1) 00:19:07.310 22.252 - 22.351: 99.9056% ( 1) 00:19:07.310 22.548 - 22.646: 99.9191% ( 2) 00:19:07.310 22.942 - 23.040: 99.9259% ( 1) 00:19:07.310 26.585 - 26.782: 99.9326% ( 1) 00:19:07.310 28.160 - 28.357: 99.9393% ( 1) 00:19:07.310 33.871 - 34.068: 99.9461% ( 1) 00:19:07.310 35.446 - 35.643: 99.9528% ( 1) 00:19:07.310 48.246 - 48.443: 99.9596% ( 1) 00:19:07.310 64.985 - 65.378: 99.9663% ( 1) 00:19:07.310 66.954 - 67.348: 99.9730% ( 1) 00:19:07.310 73.255 - 73.649: 99.9865% ( 2) 00:19:07.310 79.163 - 79.557: 99.9933% ( 1) 00:19:07.310 88.615 - 89.009: 100.0000% ( 1) 00:19:07.310 00:19:07.310 ************************************ 00:19:07.310 END TEST nvme_overhead 00:19:07.310 ************************************ 00:19:07.310 00:19:07.310 real 0m1.215s 00:19:07.310 user 0m1.071s 00:19:07.310 sys 0m0.091s 00:19:07.310 04:41:14 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.310 04:41:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:07.310 04:41:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:07.310 04:41:14 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:19:07.310 04:41:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.310 04:41:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.310 ************************************ 00:19:07.310 START TEST nvme_arbitration 00:19:07.310 ************************************ 00:19:07.310 04:41:14 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:10.588 Initializing NVMe Controllers 00:19:10.588 Attached to 0000:00:10.0 00:19:10.588 Attached to 0000:00:11.0 00:19:10.588 Attached to 0000:00:13.0 00:19:10.588 Attached to 0000:00:12.0 00:19:10.588 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:10.588 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:19:10.588 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:19:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:19:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:19:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:19:10.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:10.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:10.588 Initialization complete. Launching workers. 
00:19:10.588 Starting thread on core 1 with urgent priority queue 00:19:10.588 Starting thread on core 2 with urgent priority queue 00:19:10.588 Starting thread on core 3 with urgent priority queue 00:19:10.588 Starting thread on core 0 with urgent priority queue 00:19:10.588 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:19:10.588 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:19:10.588 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:19:10.588 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:19:10.588 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:19:10.588 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:19:10.588 ======================================================== 00:19:10.588 00:19:10.588 00:19:10.588 real 0m3.325s 00:19:10.588 user 0m9.262s 00:19:10.588 sys 0m0.107s 00:19:10.588 ************************************ 00:19:10.588 END TEST nvme_arbitration 00:19:10.588 ************************************ 00:19:10.588 04:41:17 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.588 04:41:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:10.588 04:41:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:10.588 04:41:17 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:10.588 04:41:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.588 04:41:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.588 ************************************ 00:19:10.588 START TEST nvme_single_aen 00:19:10.588 ************************************ 00:19:10.588 04:41:17 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:10.904 Asynchronous Event Request test 00:19:10.904 Attached to 0000:00:10.0 00:19:10.904 Attached to 0000:00:11.0 00:19:10.904 Attached to 0000:00:13.0 00:19:10.904 Attached to 0000:00:12.0 00:19:10.904 Reset controller to setup AER completions for this process 00:19:10.904 Registering asynchronous event callbacks... 
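The arbitration summary's "secs/100000 ios" column is simply 100000 divided by the IO/s column, which the reported figures confirm:

    echo 'scale=2; 100000 / 938.67' | bc   # 106.53
    echo 'scale=2; 100000 / 917.33' | bc   # 109.01
    echo 'scale=2; 100000 / 960.00' | bc   # 104.16 (bc truncates; the log rounds to 104.17)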
00:19:10.904 Getting orig temperature thresholds of all controllers 00:19:10.904 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:10.904 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:10.904 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:10.904 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:10.904 Setting all controllers temperature threshold low to trigger AER 00:19:10.904 Waiting for all controllers temperature threshold to be set lower 00:19:10.904 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:10.904 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:10.904 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:10.905 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:10.905 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:10.905 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:10.905 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:10.905 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:10.905 Waiting for all controllers to trigger AER and reset threshold 00:19:10.905 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:10.905 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:10.905 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:10.905 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:10.905 Cleaning up... 00:19:10.905 ************************************ 00:19:10.905 END TEST nvme_single_aen 00:19:10.905 ************************************ 00:19:10.905 00:19:10.905 real 0m0.217s 00:19:10.905 user 0m0.076s 00:19:10.905 sys 0m0.097s 00:19:10.905 04:41:17 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.905 04:41:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:10.905 04:41:17 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:10.905 04:41:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.905 04:41:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.905 04:41:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.905 ************************************ 00:19:10.905 START TEST nvme_doorbell_aers 00:19:10.905 ************************************ 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:10.905 04:41:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
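The bdfs array the doorbell test iterates over comes straight from the gen_nvme.sh | jq pipeline visible in the trace above, and that pipeline can be run standalone to see the same controller list:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0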
00:19:10.905 04:41:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:19:10.905 04:41:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:10.905 04:41:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:10.905 04:41:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:11.162 [2024-11-27 04:41:18.227251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:21.136 Executing: test_write_invalid_db 00:19:21.136 Waiting for AER completion... 00:19:21.136 Failure: test_write_invalid_db 00:19:21.136 00:19:21.136 Executing: test_invalid_db_write_overflow_sq 00:19:21.136 Waiting for AER completion... 00:19:21.136 Failure: test_invalid_db_write_overflow_sq 00:19:21.136 00:19:21.136 Executing: test_invalid_db_write_overflow_cq 00:19:21.136 Waiting for AER completion... 00:19:21.136 Failure: test_invalid_db_write_overflow_cq 00:19:21.136 00:19:21.136 04:41:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:21.136 04:41:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:19:21.136 [2024-11-27 04:41:28.269687] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:31.106 Executing: test_write_invalid_db 00:19:31.106 Waiting for AER completion... 00:19:31.106 Failure: test_write_invalid_db 00:19:31.106 00:19:31.106 Executing: test_invalid_db_write_overflow_sq 00:19:31.106 Waiting for AER completion... 00:19:31.106 Failure: test_invalid_db_write_overflow_sq 00:19:31.106 00:19:31.106 Executing: test_invalid_db_write_overflow_cq 00:19:31.106 Waiting for AER completion... 00:19:31.106 Failure: test_invalid_db_write_overflow_cq 00:19:31.106 00:19:31.106 04:41:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:31.107 04:41:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:19:31.107 [2024-11-27 04:41:38.306537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:41.072 Executing: test_write_invalid_db 00:19:41.072 Waiting for AER completion... 00:19:41.072 Failure: test_write_invalid_db 00:19:41.072 00:19:41.072 Executing: test_invalid_db_write_overflow_sq 00:19:41.072 Waiting for AER completion... 00:19:41.072 Failure: test_invalid_db_write_overflow_sq 00:19:41.072 00:19:41.072 Executing: test_invalid_db_write_overflow_cq 00:19:41.072 Waiting for AER completion... 
00:19:41.072 Failure: test_invalid_db_write_overflow_cq 00:19:41.072 00:19:41.072 04:41:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:41.073 04:41:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:19:41.331 [2024-11-27 04:41:48.321841] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 Executing: test_write_invalid_db 00:19:51.299 Waiting for AER completion... 00:19:51.299 Failure: test_write_invalid_db 00:19:51.299 00:19:51.299 Executing: test_invalid_db_write_overflow_sq 00:19:51.299 Waiting for AER completion... 00:19:51.299 Failure: test_invalid_db_write_overflow_sq 00:19:51.299 00:19:51.299 Executing: test_invalid_db_write_overflow_cq 00:19:51.299 Waiting for AER completion... 00:19:51.299 Failure: test_invalid_db_write_overflow_cq 00:19:51.299 00:19:51.299 00:19:51.299 real 0m40.192s 00:19:51.299 user 0m34.086s 00:19:51.299 sys 0m5.731s 00:19:51.299 04:41:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.299 04:41:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:51.299 ************************************ 00:19:51.299 END TEST nvme_doorbell_aers 00:19:51.299 ************************************ 00:19:51.299 04:41:58 nvme -- nvme/nvme.sh@97 -- # uname 00:19:51.299 04:41:58 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:19:51.299 04:41:58 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:51.299 04:41:58 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:19:51.299 04:41:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.299 04:41:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:51.299 ************************************ 00:19:51.299 START TEST nvme_multi_aen 00:19:51.299 ************************************ 00:19:51.299 04:41:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:51.299 [2024-11-27 04:41:58.366541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.366749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.366762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.367851] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.367880] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.367888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.368872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. 
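Each controller gets its own ten-second, status-preserving doorbell_aers run; a minimal sketch of the traced loop, reusing rootdir and bdfs from the snippet above, is:

    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

The repeated "Failure: test_write_invalid_db" style lines evidently name the injected condition (invalid doorbell writes, SQ/CQ overflow) rather than a test failure: the END TEST banner above shows the suite completed and moved on.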
Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.368899] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.368907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.369702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.369723] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 [2024-11-27 04:41:58.369730] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63295) is not found. Dropping the request. 00:19:51.299 Child process pid: 63821 00:19:51.557 [Child] Asynchronous Event Request test 00:19:51.557 [Child] Attached to 0000:00:10.0 00:19:51.557 [Child] Attached to 0000:00:11.0 00:19:51.557 [Child] Attached to 0000:00:13.0 00:19:51.557 [Child] Attached to 0000:00:12.0 00:19:51.557 [Child] Registering asynchronous event callbacks... 00:19:51.557 [Child] Getting orig temperature thresholds of all controllers 00:19:51.557 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 [Child] Waiting for all controllers to trigger AER and reset threshold 00:19:51.557 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 [Child] Cleaning up... 00:19:51.557 Asynchronous Event Request test 00:19:51.557 Attached to 0000:00:10.0 00:19:51.557 Attached to 0000:00:11.0 00:19:51.557 Attached to 0000:00:13.0 00:19:51.557 Attached to 0000:00:12.0 00:19:51.557 Reset controller to setup AER completions for this process 00:19:51.557 Registering asynchronous event callbacks... 
00:19:51.557 Getting orig temperature thresholds of all controllers 00:19:51.557 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:51.557 Setting all controllers temperature threshold low to trigger AER 00:19:51.557 Waiting for all controllers temperature threshold to be set lower 00:19:51.557 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:51.557 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:51.557 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:51.557 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:51.557 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:51.557 Waiting for all controllers to trigger AER and reset threshold 00:19:51.557 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:51.557 Cleaning up... 00:19:51.557 00:19:51.557 real 0m0.461s 00:19:51.557 user 0m0.150s 00:19:51.557 sys 0m0.201s 00:19:51.557 04:41:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.557 04:41:58 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:19:51.557 ************************************ 00:19:51.557 END TEST nvme_multi_aen 00:19:51.557 ************************************ 00:19:51.557 04:41:58 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:51.557 04:41:58 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:51.557 04:41:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.557 04:41:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:51.557 ************************************ 00:19:51.557 START TEST nvme_startup 00:19:51.557 ************************************ 00:19:51.557 04:41:58 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:51.815 Initializing NVMe Controllers 00:19:51.815 Attached to 0000:00:10.0 00:19:51.815 Attached to 0000:00:11.0 00:19:51.815 Attached to 0000:00:13.0 00:19:51.815 Attached to 0000:00:12.0 00:19:51.815 Initialization complete. 00:19:51.815 Time used:157248.266 (us). 
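Temperatures in the AER output are printed in both Kelvin and Celsius, and the paired values show the tooling uses the integer offset 273 rather than 273.15:

    echo $((343 - 273))   # 70, the original threshold in Celsius
    echo $((323 - 273))   # 50, the current temperature in Celsius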
00:19:51.815 00:19:51.815 real 0m0.217s 00:19:51.815 user 0m0.073s 00:19:51.815 sys 0m0.095s 00:19:51.815 04:41:58 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.815 04:41:58 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:19:51.815 ************************************ 00:19:51.815 END TEST nvme_startup 00:19:51.815 ************************************ 00:19:51.815 04:41:58 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:19:51.815 04:41:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:51.815 04:41:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.816 04:41:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:51.816 ************************************ 00:19:51.816 START TEST nvme_multi_secondary 00:19:51.816 ************************************ 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63866 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63867 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:19:51.816 04:41:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:19:55.119 Initializing NVMe Controllers 00:19:55.119 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:55.119 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:55.119 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:55.119 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:55.119 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:19:55.119 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:19:55.119 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:19:55.119 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:19:55.119 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:19:55.119 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:19:55.119 Initialization complete. Launching workers. 
00:19:55.119 ======================================================== 00:19:55.119 Latency(us) 00:19:55.119 Device Information : IOPS MiB/s Average min max 00:19:55.119 PCIE (0000:00:10.0) NSID 1 from core 1: 8173.40 31.93 1956.24 696.61 6503.88 00:19:55.119 PCIE (0000:00:11.0) NSID 1 from core 1: 8173.40 31.93 1957.16 704.03 6299.35 00:19:55.119 PCIE (0000:00:13.0) NSID 1 from core 1: 8173.40 31.93 1957.13 724.05 6246.11 00:19:55.119 PCIE (0000:00:12.0) NSID 1 from core 1: 8173.40 31.93 1957.10 717.76 6278.71 00:19:55.119 PCIE (0000:00:12.0) NSID 2 from core 1: 8173.40 31.93 1957.08 712.17 6610.89 00:19:55.119 PCIE (0000:00:12.0) NSID 3 from core 1: 8173.40 31.93 1957.07 719.48 6510.61 00:19:55.119 ======================================================== 00:19:55.119 Total : 49040.42 191.56 1956.96 696.61 6610.89 00:19:55.119 00:19:55.376 Initializing NVMe Controllers 00:19:55.376 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:55.376 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:55.376 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:55.376 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:55.376 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:19:55.376 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:19:55.376 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:19:55.376 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:19:55.376 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:19:55.376 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:19:55.376 Initialization complete. Launching workers. 00:19:55.376 ======================================================== 00:19:55.376 Latency(us) 00:19:55.376 Device Information : IOPS MiB/s Average min max 00:19:55.376 PCIE (0000:00:10.0) NSID 1 from core 2: 3295.24 12.87 4853.14 1068.79 14385.61 00:19:55.376 PCIE (0000:00:11.0) NSID 1 from core 2: 3295.24 12.87 4855.47 1009.95 13641.26 00:19:55.376 PCIE (0000:00:13.0) NSID 1 from core 2: 3295.24 12.87 4855.56 1013.25 15228.78 00:19:55.376 PCIE (0000:00:12.0) NSID 1 from core 2: 3295.24 12.87 4862.52 1011.52 15032.03 00:19:55.376 PCIE (0000:00:12.0) NSID 2 from core 2: 3295.24 12.87 4862.39 1016.84 14884.06 00:19:55.376 PCIE (0000:00:12.0) NSID 3 from core 2: 3295.24 12.87 4861.11 1019.20 14903.83 00:19:55.376 ======================================================== 00:19:55.376 Total : 19771.47 77.23 4858.36 1009.95 15228.78 00:19:55.376 00:19:55.376 04:42:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63866 00:19:57.272 Initializing NVMe Controllers 00:19:57.272 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:57.272 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:57.272 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:57.272 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:57.272 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:57.272 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:57.272 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:57.272 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:57.272 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:57.272 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:57.272 Initialization complete. Launching workers. 
00:19:57.272 ======================================================== 00:19:57.273 Latency(us) 00:19:57.273 Device Information : IOPS MiB/s Average min max 00:19:57.273 PCIE (0000:00:10.0) NSID 1 from core 0: 10964.90 42.83 1457.96 694.74 5687.17 00:19:57.273 PCIE (0000:00:11.0) NSID 1 from core 0: 10964.90 42.83 1458.81 710.39 5584.74 00:19:57.273 PCIE (0000:00:13.0) NSID 1 from core 0: 10964.90 42.83 1458.79 629.18 5531.43 00:19:57.273 PCIE (0000:00:12.0) NSID 1 from core 0: 10964.90 42.83 1458.77 609.67 5307.83 00:19:57.273 PCIE (0000:00:12.0) NSID 2 from core 0: 10964.90 42.83 1458.75 572.51 5534.20 00:19:57.273 PCIE (0000:00:12.0) NSID 3 from core 0: 10964.90 42.83 1458.74 567.19 5576.15 00:19:57.273 ======================================================== 00:19:57.273 Total : 65789.43 256.99 1458.64 567.19 5687.17 00:19:57.273 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63867 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63936 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63937 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:19:57.273 04:42:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:20:00.558 Initializing NVMe Controllers 00:20:00.558 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:00.558 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:00.558 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:00.558 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:00.558 Initialization complete. Launching workers. 
00:20:00.558 ======================================================== 00:20:00.558 Latency(us) 00:20:00.558 Device Information : IOPS MiB/s Average min max 00:20:00.558 PCIE (0000:00:10.0) NSID 1 from core 0: 8095.99 31.62 1974.92 714.51 6228.29 00:20:00.558 PCIE (0000:00:11.0) NSID 1 from core 0: 8095.99 31.62 1975.94 730.56 6890.78 00:20:00.558 PCIE (0000:00:13.0) NSID 1 from core 0: 8095.99 31.62 1975.97 736.00 6543.40 00:20:00.558 PCIE (0000:00:12.0) NSID 1 from core 0: 8095.99 31.62 1976.01 732.88 6319.14 00:20:00.558 PCIE (0000:00:12.0) NSID 2 from core 0: 8095.99 31.62 1976.05 730.30 6373.28 00:20:00.558 PCIE (0000:00:12.0) NSID 3 from core 0: 8095.99 31.62 1976.10 720.21 6263.93 00:20:00.558 ======================================================== 00:20:00.558 Total : 48575.92 189.75 1975.83 714.51 6890.78 00:20:00.558 00:20:00.558 Initializing NVMe Controllers 00:20:00.558 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:00.558 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:00.558 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:20:00.558 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:20:00.558 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:20:00.558 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:20:00.558 Initialization complete. Launching workers. 00:20:00.558 ======================================================== 00:20:00.558 Latency(us) 00:20:00.558 Device Information : IOPS MiB/s Average min max 00:20:00.558 PCIE (0000:00:10.0) NSID 1 from core 1: 8234.54 32.17 1941.65 712.54 5852.82 00:20:00.558 PCIE (0000:00:11.0) NSID 1 from core 1: 8234.54 32.17 1942.55 734.68 5265.62 00:20:00.558 PCIE (0000:00:13.0) NSID 1 from core 1: 8234.54 32.17 1942.47 646.22 5401.59 00:20:00.558 PCIE (0000:00:12.0) NSID 1 from core 1: 8234.54 32.17 1942.43 624.53 5267.20 00:20:00.558 PCIE (0000:00:12.0) NSID 2 from core 1: 8234.54 32.17 1942.43 674.26 5767.19 00:20:00.558 PCIE (0000:00:12.0) NSID 3 from core 1: 8234.54 32.17 1942.39 676.69 6139.34 00:20:00.558 ======================================================== 00:20:00.558 Total : 49407.26 193.00 1942.32 624.53 6139.34 00:20:00.558 00:20:02.481 Initializing NVMe Controllers 00:20:02.481 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:02.481 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:02.481 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:02.481 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:02.481 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:20:02.481 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:20:02.481 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:20:02.481 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:20:02.481 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:20:02.481 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:20:02.481 Initialization complete. Launching workers. 
00:20:02.481 ======================================================== 00:20:02.481 Latency(us) 00:20:02.481 Device Information : IOPS MiB/s Average min max 00:20:02.482 PCIE (0000:00:10.0) NSID 1 from core 2: 4632.98 18.10 3451.51 716.60 12612.18 00:20:02.482 PCIE (0000:00:11.0) NSID 1 from core 2: 4632.98 18.10 3450.03 735.54 12245.12 00:20:02.482 PCIE (0000:00:13.0) NSID 1 from core 2: 4632.98 18.10 3449.63 716.05 12441.09 00:20:02.482 PCIE (0000:00:12.0) NSID 1 from core 2: 4632.98 18.10 3449.07 687.12 12032.21 00:20:02.482 PCIE (0000:00:12.0) NSID 2 from core 2: 4632.98 18.10 3449.02 644.83 11989.90 00:20:02.482 PCIE (0000:00:12.0) NSID 3 from core 2: 4632.98 18.10 3449.15 603.69 12505.04 00:20:02.482 ======================================================== 00:20:02.482 Total : 27797.90 108.59 3449.74 603.69 12612.18 00:20:02.482 00:20:02.482 ************************************ 00:20:02.482 END TEST nvme_multi_secondary 00:20:02.482 ************************************ 00:20:02.482 04:42:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63936 00:20:02.482 04:42:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63937 00:20:02.482 00:20:02.482 real 0m10.682s 00:20:02.482 user 0m18.389s 00:20:02.482 sys 0m0.638s 00:20:02.482 04:42:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.482 04:42:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:20:02.482 04:42:09 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:20:02.482 04:42:09 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:20:02.482 04:42:09 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62904 ]] 00:20:02.482 04:42:09 nvme -- common/autotest_common.sh@1094 -- # kill 62904 00:20:02.482 04:42:09 nvme -- common/autotest_common.sh@1095 -- # wait 62904 00:20:02.482 [2024-11-27 04:42:09.657580] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.658005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.658048] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.658095] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.661058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.661137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.661157] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.661178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.662808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 
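The nvme_multi_secondary pass that just ended exercises SPDK's multi-process mode: every spdk_nvme_perf instance is started with the same shared-memory instance ID (-i 0), so the jobs on the other core masks attach as secondaries to the controllers initialized by the first process, and the harness tracks the backgrounded jobs through pid0/pid1 exactly as the wait 63866/63867 and wait 63936/63937 calls show. A minimal standalone reproduction of that pattern, reusing the binary and flags from this run (the sleep is an assumption standing in for the harness's own startup sequencing):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # All three jobs share instance id 0; disjoint core masks keep them on separate lcores.
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # first/longest job, acts as the primary
    pid0=$!
    sleep 2                                             # assumed grace period for primary init
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2, foreground
    wait "$pid0" "$pid1"

The lower IOPS and higher averages in the "from core 2" tables are consistent with three readers contending for the same emulated [1b36:0010] controllers while their runtimes overlap, rather than any inherent multi-process penalty.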
00:20:02.482 [2024-11-27 04:42:09.662845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.662855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.662865] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.664391] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.664427] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.664437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.482 [2024-11-27 04:42:09.664449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63820) is not found. Dropping the request. 00:20:02.740 04:42:09 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:20:02.740 04:42:09 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:20:02.740 04:42:09 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:02.740 04:42:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:02.740 04:42:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.740 04:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:02.740 ************************************ 00:20:02.740 START TEST bdev_nvme_reset_stuck_adm_cmd 00:20:02.740 ************************************ 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:02.740 * Looking for test storage... 
00:20:02.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:20:02.740 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.741 --rc genhtml_branch_coverage=1 00:20:02.741 --rc genhtml_function_coverage=1 00:20:02.741 --rc genhtml_legend=1 00:20:02.741 --rc geninfo_all_blocks=1 00:20:02.741 --rc geninfo_unexecuted_blocks=1 00:20:02.741 00:20:02.741 ' 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.741 --rc genhtml_branch_coverage=1 00:20:02.741 --rc genhtml_function_coverage=1 00:20:02.741 --rc genhtml_legend=1 00:20:02.741 --rc geninfo_all_blocks=1 00:20:02.741 --rc geninfo_unexecuted_blocks=1 00:20:02.741 00:20:02.741 ' 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.741 --rc genhtml_branch_coverage=1 00:20:02.741 --rc genhtml_function_coverage=1 00:20:02.741 --rc genhtml_legend=1 00:20:02.741 --rc geninfo_all_blocks=1 00:20:02.741 --rc geninfo_unexecuted_blocks=1 00:20:02.741 00:20:02.741 ' 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.741 --rc genhtml_branch_coverage=1 00:20:02.741 --rc genhtml_function_coverage=1 00:20:02.741 --rc genhtml_legend=1 00:20:02.741 --rc geninfo_all_blocks=1 00:20:02.741 --rc geninfo_unexecuted_blocks=1 00:20:02.741 00:20:02.741 ' 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:02.741 
04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:02.741 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:02.998 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:02.998 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:02.998 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:20:02.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.998 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64103 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64103 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64103 ']' 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
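get_first_nvme_bdf, traced above, picks the test target by rendering SPDK's generated attach configuration and flattening it to PCI addresses: gen_nvme.sh emits one bdev_nvme_attach_controller entry per local controller, jq pulls each params.traddr out, and the caller keeps the first entry (0000:00:10.0 here, out of the four QEMU devices). The same pipeline can be run by hand:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Each .config[].params.traddr in gen_nvme.sh's JSON output is a PCI BDF.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf 'found %s controllers, first bdf: %s\n' "${#bdfs[@]}" "${bdfs[0]}"

spdk_tgt is then launched with -m 0xF, and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers, which is what the "Waiting for process to start up..." message just above reflects.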
00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.999 04:42:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:02.999 [2024-11-27 04:42:10.038634] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:20:02.999 [2024-11-27 04:42:10.038737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64103 ] 00:20:03.256 [2024-11-27 04:42:10.201681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.256 [2024-11-27 04:42:10.304303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.256 [2024-11-27 04:42:10.304685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.256 [2024-11-27 04:42:10.304738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.256 [2024-11-27 04:42:10.304768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.821 04:42:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.822 04:42:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:20:03.822 04:42:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:03.822 04:42:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.822 04:42:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:03.822 nvme0n1 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jalhV.txt 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.822 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.080 true 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732682531 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64126 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:20:04.080 04:42:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:06.029 [2024-11-27 04:42:13.036957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:20:06.029 [2024-11-27 04:42:13.037240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:06.029 [2024-11-27 04:42:13.037265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:06.029 [2024-11-27 04:42:13.037278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.029 [2024-11-27 04:42:13.038920] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:20:06.029 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64126 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64126 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64126 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jalhV.txt 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:06.029 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jalhV.txt 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64103 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64103 ']' 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64103 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64103 00:20:06.030 killing process with pid 64103 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64103' 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64103 00:20:06.030 04:42:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64103 00:20:07.928 04:42:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:07.928 04:42:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:07.928 00:20:07.928 real 0m4.861s 00:20:07.928 user 0m17.521s 00:20:07.928 sys 0m0.502s 00:20:07.928 ************************************ 00:20:07.928 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:07.928 
************************************ 00:20:07.928 04:42:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.928 04:42:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.928 04:42:14 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:07.928 04:42:14 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:07.928 04:42:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.928 04:42:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.928 04:42:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:07.928 ************************************ 00:20:07.928 START TEST nvme_fio 00:20:07.928 ************************************ 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:07.928 04:42:14 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:07.928 04:42:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:08.187 04:42:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:08.187 04:42:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.187 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.187 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:08.188 04:42:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.445 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:08.445 fio-3.35 00:20:08.445 Starting 1 thread 00:20:12.627 00:20:12.627 test: (groupid=0, jobs=1): err= 0: pid=64268: Wed Nov 27 04:42:19 2024 00:20:12.627 read: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(137MiB/2014msec) 00:20:12.627 slat (nsec): min=3410, max=51038, avg=5275.61, stdev=2590.17 00:20:12.627 clat (usec): min=830, max=14972, avg=2908.08, stdev=1084.04 00:20:12.627 lat (usec): min=835, max=14975, avg=2913.36, stdev=1085.00 00:20:12.627 clat percentiles (usec): 00:20:12.627 | 1.00th=[ 1287], 5.00th=[ 1778], 10.00th=[ 2245], 20.00th=[ 2409], 00:20:12.627 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:20:12.627 | 70.00th=[ 2769], 80.00th=[ 3359], 90.00th=[ 4424], 95.00th=[ 5276], 00:20:12.627 | 99.00th=[ 6325], 99.50th=[ 7046], 99.90th=[ 9765], 99.95th=[14091], 00:20:12.627 | 99.99th=[14877] 00:20:12.627 bw ( KiB/s): min=38448, max=93256, per=100.00%, avg=69914.00, stdev=26412.36, samples=4 00:20:12.627 iops : min= 9612, max=23314, avg=17478.50, stdev=6603.09, samples=4 00:20:12.627 write: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(137MiB/2014msec); 0 zone resets 00:20:12.627 slat (nsec): min=3526, max=62269, avg=5530.86, stdev=2587.08 00:20:12.627 clat (usec): min=814, max=37052, avg=4429.18, stdev=4521.41 00:20:12.627 lat (usec): min=818, max=37057, avg=4434.71, stdev=4521.71 00:20:12.627 clat percentiles (usec): 00:20:12.627 | 1.00th=[ 1500], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2442], 00:20:12.627 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:20:12.627 | 70.00th=[ 3064], 80.00th=[ 4490], 90.00th=[11863], 95.00th=[15795], 00:20:12.627 | 99.00th=[21890], 99.50th=[23462], 99.90th=[31589], 99.95th=[34341], 00:20:12.627 | 99.99th=[36439] 00:20:12.627 bw ( KiB/s): min=38576, max=93024, per=100.00%, avg=69854.00, stdev=25855.38, samples=4 00:20:12.627 iops : min= 9644, max=23256, avg=17463.50, stdev=6463.85, samples=4 00:20:12.627 lat (usec) : 1000=0.07% 00:20:12.627 lat (msec) : 2=5.23%, 4=76.17%, 10=12.94%, 20=4.69%, 50=0.91% 00:20:12.627 cpu : usr=99.25%, sys=0.05%, ctx=4, majf=0, minf=606 00:20:12.627 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:12.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.627 issued rwts: total=34991,35020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.627 00:20:12.627 Run status group 0 (all jobs): 00:20:12.627 READ: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=137MiB (143MB), run=2014-2014msec 00:20:12.627 WRITE: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=137MiB (143MB), run=2014-2014msec 00:20:12.627 ----------------------------------------------------- 00:20:12.627 Suppressions used: 00:20:12.627 count bytes template 00:20:12.627 1 32 /usr/src/fio/parse.c 00:20:12.627 1 8 libtcmalloc_minimal.so 00:20:12.627 ----------------------------------------------------- 00:20:12.627 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:12.627 04:42:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:12.627 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:12.885 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:12.885 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:12.885 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:12.885 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:12.885 04:42:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:12.885 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:12.885 fio-3.35 00:20:12.885 Starting 1 thread 00:20:25.106 00:20:25.106 test: (groupid=0, jobs=1): err= 0: pid=64324: Wed Nov 27 04:42:31 2024 00:20:25.106 read: IOPS=23.0k, BW=90.0MiB/s (94.4MB/s)(180MiB/2001msec) 00:20:25.106 slat (usec): min=3, max=101, avg= 5.00, stdev= 2.35 00:20:25.106 clat (usec): min=278, max=6479, avg=2775.63, stdev=566.51 00:20:25.106 lat (usec): min=282, max=6504, avg=2780.63, stdev=567.52 00:20:25.106 clat percentiles (usec): 00:20:25.106 | 1.00th=[ 2057], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:20:25.106 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2671], 00:20:25.106 | 70.00th=[ 2769], 80.00th=[ 2868], 90.00th=[ 3130], 95.00th=[ 3884], 00:20:25.106 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6259], 00:20:25.106 | 99.99th=[ 6325] 00:20:25.106 bw ( KiB/s): min=87968, max=94496, per=99.33%, avg=91533.33, stdev=3305.47, samples=3 00:20:25.106 iops : min=21992, max=23624, avg=22883.33, stdev=826.37, samples=3 00:20:25.106 write: IOPS=22.9k, BW=89.5MiB/s (93.8MB/s)(179MiB/2001msec); 0 zone resets 00:20:25.106 slat (nsec): min=3464, max=99336, avg=5251.34, stdev=2166.44 00:20:25.106 clat (usec): min=219, max=6427, avg=2778.34, stdev=568.85 00:20:25.106 lat (usec): min=223, max=6433, avg=2783.59, stdev=569.80 00:20:25.106 clat percentiles (usec): 00:20:25.106 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:20:25.106 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2671], 00:20:25.106 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3163], 95.00th=[ 3884], 00:20:25.106 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6259], 00:20:25.106 | 99.99th=[ 6325] 00:20:25.106 bw ( KiB/s): min=87472, max=93856, per=100.00%, avg=91664.00, stdev=3631.65, samples=3 00:20:25.106 iops : min=21868, max=23464, avg=22916.00, stdev=907.91, samples=3 00:20:25.106 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:20:25.106 lat (msec) : 2=0.76%, 4=94.56%, 10=4.63% 00:20:25.106 cpu : usr=99.15%, sys=0.10%, ctx=4, majf=0, minf=606 00:20:25.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:25.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.106 issued rwts: total=46097,45830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.106 00:20:25.106 Run status group 0 (all jobs): 00:20:25.106 READ: bw=90.0MiB/s (94.4MB/s), 90.0MiB/s-90.0MiB/s (94.4MB/s-94.4MB/s), io=180MiB (189MB), run=2001-2001msec 00:20:25.106 WRITE: bw=89.5MiB/s (93.8MB/s), 89.5MiB/s-89.5MiB/s (93.8MB/s-93.8MB/s), io=179MiB (188MB), run=2001-2001msec 00:20:25.106 ----------------------------------------------------- 00:20:25.106 Suppressions used: 00:20:25.106 count bytes template 00:20:25.106 1 32 /usr/src/fio/parse.c 00:20:25.106 1 8 libtcmalloc_minimal.so 00:20:25.106 ----------------------------------------------------- 00:20:25.106 00:20:25.106 04:42:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:20:25.106 04:42:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:25.106 04:42:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:25.106 04:42:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:25.106 04:42:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:25.106 04:42:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:25.363 04:42:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:25.363 04:42:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:25.363 04:42:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:25.363 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:25.363 fio-3.35 00:20:25.363 Starting 1 thread 00:20:29.537 00:20:29.537 test: (groupid=0, jobs=1): err= 0: pid=64385: Wed Nov 27 04:42:36 2024 00:20:29.537 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(126MiB/2021msec) 00:20:29.537 slat (usec): min=3, max=142, avg= 5.03, stdev= 2.62 00:20:29.537 clat (usec): min=867, max=64522, avg=3048.81, stdev=2299.76 00:20:29.537 lat (usec): min=871, max=64526, avg=3053.84, stdev=2299.97 00:20:29.537 clat percentiles (usec): 00:20:29.537 | 1.00th=[ 1401], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2442], 00:20:29.537 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 
00:20:29.537 | 70.00th=[ 2671], 80.00th=[ 3228], 90.00th=[ 4490], 95.00th=[ 5604], 00:20:29.537 | 99.00th=[ 7635], 99.50th=[ 8586], 99.90th=[23725], 99.95th=[63701], 00:20:29.537 | 99.99th=[64226] 00:20:29.537 bw ( KiB/s): min=29864, max=96568, per=100.00%, avg=64454.00, stdev=36146.97, samples=4 00:20:29.537 iops : min= 7466, max=24142, avg=16113.50, stdev=9036.74, samples=4 00:20:29.537 write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(126MiB/2021msec); 0 zone resets 00:20:29.537 slat (nsec): min=3471, max=57767, avg=5314.58, stdev=2420.30 00:20:29.537 clat (usec): min=831, max=77972, avg=4932.99, stdev=6637.64 00:20:29.537 lat (usec): min=835, max=77977, avg=4938.30, stdev=6637.88 00:20:29.537 clat percentiles (usec): 00:20:29.537 | 1.00th=[ 1696], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:20:29.537 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:20:29.537 | 70.00th=[ 2769], 80.00th=[ 4178], 90.00th=[13304], 95.00th=[19530], 00:20:29.537 | 99.00th=[27919], 99.50th=[32113], 99.90th=[76022], 99.95th=[77071], 00:20:29.537 | 99.99th=[77071] 00:20:29.537 bw ( KiB/s): min=29992, max=95704, per=100.00%, avg=64454.00, stdev=35668.52, samples=4 00:20:29.537 iops : min= 7498, max=23926, avg=16113.50, stdev=8917.13, samples=4 00:20:29.537 lat (usec) : 1000=0.05% 00:20:29.537 lat (msec) : 2=2.75%, 4=80.12%, 10=11.22%, 20=3.44%, 50=2.22% 00:20:29.537 lat (msec) : 100=0.20% 00:20:29.537 cpu : usr=99.31%, sys=0.00%, ctx=6, majf=0, minf=606 00:20:29.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:29.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.537 issued rwts: total=32261,32320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.537 00:20:29.537 Run status group 0 (all jobs): 00:20:29.537 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=126MiB (132MB), run=2021-2021msec 00:20:29.537 WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=126MiB (132MB), run=2021-2021msec 00:20:29.793 ----------------------------------------------------- 00:20:29.793 Suppressions used: 00:20:29.793 count bytes template 00:20:29.793 1 32 /usr/src/fio/parse.c 00:20:29.793 1 8 libtcmalloc_minimal.so 00:20:29.793 ----------------------------------------------------- 00:20:29.793 00:20:29.793 04:42:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:29.793 04:42:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:29.793 04:42:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:29.793 04:42:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:30.049 04:42:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:30.049 04:42:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:30.307 04:42:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:30.307 04:42:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:30.307 04:42:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:30.307 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:30.307 fio-3.35 00:20:30.307 Starting 1 thread 00:20:40.343 00:20:40.343 test: (groupid=0, jobs=1): err= 0: pid=64446: Wed Nov 27 04:42:46 2024 00:20:40.343 read: IOPS=21.4k, BW=83.6MiB/s (87.7MB/s)(167MiB/2001msec) 00:20:40.343 slat (nsec): min=3360, max=75150, avg=5401.54, stdev=2650.42 00:20:40.343 clat (usec): min=194, max=6843, avg=2990.67, stdev=788.04 00:20:40.343 lat (usec): min=199, max=6848, avg=2996.07, stdev=789.64 00:20:40.343 clat percentiles (usec): 00:20:40.343 | 1.00th=[ 1893], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:20:40.343 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:20:40.343 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4359], 95.00th=[ 4686], 00:20:40.343 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[ 6587], 00:20:40.343 | 99.99th=[ 6718] 00:20:40.344 bw ( KiB/s): min=83856, max=90160, per=100.00%, avg=86738.67, stdev=3186.33, samples=3 00:20:40.344 iops : min=20964, max=22540, avg=21684.67, stdev=796.58, samples=3 00:20:40.344 write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:20:40.344 slat (nsec): min=3533, max=63291, avg=5775.93, stdev=2620.29 00:20:40.344 clat (usec): min=219, max=6921, avg=2993.12, stdev=786.00 00:20:40.344 lat (usec): min=224, max=6926, avg=2998.90, stdev=787.60 00:20:40.344 clat percentiles (usec): 00:20:40.344 | 1.00th=[ 1942], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:20:40.344 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:20:40.344 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4359], 95.00th=[ 4686], 
00:20:40.344 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 6652], 00:20:40.344 | 99.99th=[ 6783] 00:20:40.344 bw ( KiB/s): min=84128, max=90864, per=100.00%, avg=86930.67, stdev=3507.45, samples=3 00:20:40.344 iops : min=21032, max=22716, avg=21732.67, stdev=876.86, samples=3 00:20:40.344 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:20:40.344 lat (msec) : 2=1.21%, 4=84.91%, 10=13.82% 00:20:40.344 cpu : usr=99.20%, sys=0.05%, ctx=4, majf=0, minf=605 00:20:40.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:40.344 issued rwts: total=42836,42506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:40.344 00:20:40.344 Run status group 0 (all jobs): 00:20:40.344 READ: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=167MiB (175MB), run=2001-2001msec 00:20:40.344 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:20:40.344 ----------------------------------------------------- 00:20:40.344 Suppressions used: 00:20:40.344 count bytes template 00:20:40.344 1 32 /usr/src/fio/parse.c 00:20:40.344 1 8 libtcmalloc_minimal.so 00:20:40.344 ----------------------------------------------------- 00:20:40.344 00:20:40.344 04:42:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:40.344 04:42:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:40.344 00:20:40.344 real 0m31.526s 00:20:40.344 user 0m16.217s 00:20:40.344 sys 0m28.717s 00:20:40.344 04:42:46 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.344 04:42:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:40.344 ************************************ 00:20:40.344 END TEST nvme_fio 00:20:40.344 ************************************ 00:20:40.344 00:20:40.344 real 1m40.742s 00:20:40.344 user 3m37.570s 00:20:40.344 sys 0m39.188s 00:20:40.344 04:42:46 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.344 04:42:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:40.344 ************************************ 00:20:40.344 END TEST nvme 00:20:40.344 ************************************ 00:20:40.344 04:42:46 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:20:40.344 04:42:46 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:40.344 04:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.344 04:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.344 04:42:46 -- common/autotest_common.sh@10 -- # set +x 00:20:40.344 ************************************ 00:20:40.344 START TEST nvme_scc 00:20:40.344 ************************************ 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:40.344 * Looking for test storage... 
00:20:40.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@345 -- # : 1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@368 -- # return 0 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.344 --rc genhtml_branch_coverage=1 00:20:40.344 --rc genhtml_function_coverage=1 00:20:40.344 --rc genhtml_legend=1 00:20:40.344 --rc geninfo_all_blocks=1 00:20:40.344 --rc geninfo_unexecuted_blocks=1 00:20:40.344 00:20:40.344 ' 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.344 --rc genhtml_branch_coverage=1 00:20:40.344 --rc genhtml_function_coverage=1 00:20:40.344 --rc genhtml_legend=1 00:20:40.344 --rc geninfo_all_blocks=1 00:20:40.344 --rc geninfo_unexecuted_blocks=1 00:20:40.344 00:20:40.344 ' 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.344 --rc genhtml_branch_coverage=1 00:20:40.344 --rc genhtml_function_coverage=1 00:20:40.344 --rc genhtml_legend=1 00:20:40.344 --rc geninfo_all_blocks=1 00:20:40.344 --rc geninfo_unexecuted_blocks=1 00:20:40.344 00:20:40.344 ' 00:20:40.344 04:42:46 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.344 --rc genhtml_branch_coverage=1 00:20:40.344 --rc genhtml_function_coverage=1 00:20:40.344 --rc genhtml_legend=1 00:20:40.344 --rc geninfo_all_blocks=1 00:20:40.344 --rc geninfo_unexecuted_blocks=1 00:20:40.344 00:20:40.344 ' 00:20:40.344 04:42:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.344 04:42:46 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.344 04:42:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.344 04:42:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.344 04:42:46 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.344 04:42:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:20:40.344 04:42:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
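The lcov probe traced above reduces to scripts/common.sh's component-wise version compare: `lt 1.15 2` splits each version string on `.-:` and walks the components left to right, so 1.15 loses to 2 on the first component. A minimal standalone sketch of that comparison, assuming purely numeric components (the real helper additionally routes each component through its `decimal` guard, which this sketch skips):

  lt_sketch() {
      # Split both version strings on . - : into arrays, as cmp_versions does.
      local IFS='.-:'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      # Compare component-wise, padding the shorter version with zeros.
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt_sketch 1.15 2 && echo "lcov is older than 2"   # fires, matching the traced result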
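From here the log is scan_nvme_ctrls scraping controllers: for each /sys/class/nvme/nvme*, nvme_get runs nvme-cli's `id-ctrl` (and later `id-ns` for each namespace) and evals every `field : value` line of its output into a per-device bash associative array, which is why the trace below repeats the `IFS=:` / `read -r reg val` / `eval` triple for hundreds of registers. A minimal sketch of that parse loop — `ctrl_regs` and the exact trimming here are illustrative stand-ins, not the functions.sh internals:

  shopt -s extglob                       # also enabled by scripts/common.sh in the trace
  declare -A ctrl_regs
  while IFS=: read -r reg val; do
      reg=${reg%%+([[:space:]])}         # drop the padding after the field name
      val=${val##+([[:space:]])}         # drop the padding before the value
      val=${val%%+([[:space:]])}
      [[ -n $reg && -n $val ]] || continue   # skip banner lines with no value
      ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)      # assumes nvme-cli and a /dev/nvme0, as in this VM
  echo "vid=${ctrl_regs[vid]} sn=${ctrl_regs[sn]} mdts=${ctrl_regs[mdts]}"

The real helper evals into an array named after the device (`nvme0`, `ng0n1`, ...) so later checks can look registers up per controller; a single array is used above only for brevity.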
00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:40.344 04:42:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:40.345 04:42:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:40.345 04:42:46 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:40.345 04:42:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:40.345 04:42:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:20:40.345 04:42:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:20:40.345 04:42:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:40.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.345 Waiting for block devices as requested 00:20:40.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.345 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.345 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.345 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.689 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:45.689 04:42:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:45.689 04:42:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:20:45.689 04:42:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:45.689 04:42:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:45.689 04:42:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.689 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:45.689 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:45.690 04:42:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:45.690 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:45.691 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:45.692 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:20:45.692 
04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.692 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:20:45.693 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.693 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:45.694 04:42:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:45.694 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:45.694 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:45.695 04:42:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:20:45.695 04:42:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:45.695 04:42:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:45.695 04:42:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:45.695 04:42:52 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 
04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.695 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:45.696 
04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:45.696 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
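The sqes=0x66, cqes=0x44, and mdts=7 values captured just above are packed power-of-two fields from Identify Controller: the low nibble of SQES/CQES is the required (minimum) queue-entry size as log2 bytes, the high nibble the maximum, and MDTS is a log2 multiple of the controller's minimum memory page size. A quick bash decode, assuming MPSMIN=0 (4 KiB pages), which is typical for QEMU's emulated controller:

  decode_qes() {  # print min/max queue entry size for a packed SQES/CQES byte
    local qes=$1
    printf 'min=%d max=%d bytes\n' $(( 2 ** (qes & 0xf) )) $(( 2 ** (qes >> 4) ))
  }
  decode_qes 0x66              # SQES -> min=64 max=64 bytes
  decode_qes 0x44              # CQES -> min=16 max=16 bytes
  mdts=7 mpsmin_bytes=4096     # assumption: MPSMIN=0, i.e. 4 KiB pages
  echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB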
00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.697 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.698 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:45.698 04:42:52 
00:20:45.698 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:45.699 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:20:45.699 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
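Note: the @54 loop above depends on bash's extglob. With ctrl=/sys/class/nvme/nvme1, "ng${ctrl##*nvme}" expands to ng1 and "${ctrl##*/}n" to nvme1n, so the @(...) alternation matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory, and ${ns##*n} recovers the namespace id used as the array index at @58. A standalone sketch of the same enumeration, assuming a live controller at nvme1:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # @(ng1|nvme1n)* matches ng1n1, nvme1n1, ng1n2, nvme1n2, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "namespace node: ${ns##*/} (nsid ${ns##*n})"
    done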
00:20:45.700 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:45.701 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:20:45.701 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:45.701 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
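Note: each lbafN value above packs a metadata size (ms), a log2 data-block size (lbads), and a relative-performance hint (rp); flbas=0x7 marks format 7 as the one in use. A small decode of one such string with the same parameter-expansion idiom the trace uses (variable names are illustrative):

    # Mirrors ${nvme1n1[lbaf7]} as recorded above.
    lbaf='ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
    ms=${lbaf##*ms:};       ms=${ms%% *}
    echo "data block: $((1 << lbads)) bytes, metadata per block: $ms bytes"
    # -> data block: 4096 bytes, metadata per block: 64 bytes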
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:20:45.702 04:42:52 nvme_scc -- scripts/common.sh@18-27 -- # pci filter checks pass for 0000:00:12.0, return 0
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl '
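Note: the @60-@63 lines above register the finished controller: the id-ctrl array name, the name of its namespace map, its PCI address, and a numerically indexed slot derived from the device name. A condensed sketch of that bookkeeping, with the surrounding declarations assumed:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1                 # name of the id-ctrl array
    nvmes["$ctrl_dev"]=nvme1_ns              # name of its namespace map
    bdfs["$ctrl_dev"]=0000:00:10.0           # PCI address of the controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # numeric slot, here index 1

    echo "controller ${ordered_ctrls[1]} sits at ${bdfs[nvme1]}"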
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:20:45.702 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:20:45.703 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.704 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.705 
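The trace above is bash xtrace output from a helper that turns `nvme id-ctrl` "field : value" lines into a global associative array named by its first argument. A minimal sketch of that pattern, reconstructed from the @16-@23 trace lines (not the verbatim nvme/functions.sh; the whitespace trimming and the NVME_CMD stand-in for the pinned /usr/local/src/nvme-cli/nvme binary are assumptions):

    nvme_get() {
        local ref=$1 reg val          # @17: ref names the target array, e.g. "nvme2"
        shift                         # @18: remaining args are the nvme subcommand
        local -gA "$ref=()"           # @20: declare the global associative array
        while IFS=: read -r reg val; do           # @21: split "reg : val" at ':'
            reg=${reg//[[:space:]]/}              # assumed: strip field-name padding
            val=${val# }                          # assumed: drop the space after ':'
            [[ -n $val ]] || continue             # @22: skip headers/blank lines
            eval "${ref}[$reg]=\"\$val\""         # @23: nvme2[sqes]="0x66", ...
        done < <(${NVME_CMD:-nvme} "$@")          # @16: e.g. nvme id-ctrl /dev/nvme2
    }

For reference, the non-zero values captured here decode as follows: sqes=0x66 and cqes=0x44 encode required/maximum queue entry sizes as powers of two (2^6 = 64-byte submission entries, 2^4 = 16-byte completion entries), and oncs=0x15d is the optional-command bitmask (bits 0, 2, 3, 4, 6 and 8: Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy).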
04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:20:45.705 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:20:45.705-00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@16-23 -- [id-ns trace for ng2n1, condensed: same IFS=: / read / eval sequence as above]
00:20:45.705 ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:45.706 ng2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:20:45.706 ng2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:45.706 ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
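The ng2n1 values above pin down the namespace geometry: flbas=0x4 selects LBA format 4 (lbads:12, i.e. 2^12 = 4096-byte blocks with no metadata), and nsze=0x100000 such blocks give a 4 GiB namespace. A quick sanity check of that arithmetic (variable names are illustrative only; values are copied from the trace):

    flbas=0x4; nsze=0x100000; lbads=12          # lbads taken from lbaf4 "lbads:12"
    fmt=$(( flbas & 0xf ))                      # bits 3:0 -> in-use format index: 4
    bytes=$(( nsze * (1 << lbads) ))            # 1048576 blocks * 4096 B
    echo "format=$fmt size=$bytes bytes ($(( bytes >> 30 )) GiB)"
    # -> format=4 size=4294967296 bytes (4 GiB)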
00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:20:45.707 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:20:45.707-00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@16-23 -- [id-ns trace for ng2n2, condensed: identical to ng2n1 field for field]
00:20:45.707 ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 mssrl=128 mcl=128 msrc=127 lbaf4='ms:0 lbads:12 rp:0 (in use)' (remaining fields 0, as for ng2n1)
00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
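Each pass of the @54 loop above comes from an extglob pattern that matches both the character-device ("ng2nN") and block-device ("nvme2nN") namespace entries under the controller's sysfs directory, indexing _ctrl_ns by the trailing namespace number. A minimal standalone re-creation of that enumeration (the echo is illustrative; the real script stores each hit in _ctrl_ns instead, and extglob/nullglob are assumed to be enabled by the harness):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # "${ctrl##*nvme}" -> "2" and "${ctrl##*/}" -> "nvme2", so the glob expands to
    # /sys/class/nvme/nvme2/@(ng2|nvme2n)* and matches ng2n1, ng2n2, nvme2n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace index ${ns##*n}: ${ns##*/}"   # ${ns##*n} -> 1, 2, 3, ...
    done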
00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:20:45.708 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:20:45.708-00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@16-23 -- [id-ns trace for ng2n3, condensed: identical to ng2n1/ng2n2 field for field through lbaf4]
00:20:45.709 ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 mssrl=128 mcl=128 msrc=127 lbaf4='ms:0 lbads:12 rp:0 (in use)' (remaining fields 0, as for ng2n1)
00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[
-n ms:8 lbads:12 rp:0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.710 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.710 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:20:45.711 04:42:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.711 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:20:45.712 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.712 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:20:45.713 
04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:45.713 04:42:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:45.713 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:45.714 04:42:52 nvme_scc -- 
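The trace above is nvme/functions.sh@16-23 at work: for each namespace node under /sys/class/nvme/nvme2 it runs the bundled nvme-cli, splits every "field : value" line of the id-ns output on the first colon, and evals the pair into a global associative array named after the device (ng2n3, nvme2n1, nvme2n2, nvme2n3). A minimal, self-contained sketch of that loop follows; nvme_get_sketch and the three-line sample input are illustrative stand-ins reconstructed from the trace, not the script's actual helper, which shells out to /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2nX.

#!/usr/bin/env bash
# Sketch of the parsing loop traced as nvme/functions.sh@16-23 (assumed shape,
# reconstructed from the xtrace output above; not the script's real helper).
nvme_get_sketch() {
    local ref=$1 reg val            # $1 names the target array, e.g. nvme2n1
    declare -gA "$ref"              # create the array in the global scope
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # field names arrive left-padded
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\${val# }"  # store the value, minus one leading blank
    done
}

# Feed it a few "field : value" lines in the shape nvme-cli prints:
nvme_get_sketch demo <<'EOF'
nsze  : 0x100000
flbas : 0x4
mssrl : 128
EOF
declare -p demo   # e.g. declare -A demo=([nsze]="0x100000" [flbas]="0x4" [mssrl]="128" )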
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:45.714 04:42:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:20:45.714 04:42:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:20:45.714 04:42:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:45.714 04:42:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:45.714 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:45.715 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:45.715 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.715 
04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:45.715 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:45.716 04:42:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 
04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:45.716 
04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:45.716 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:45.717 04:42:52 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:45.717 04:42:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:20:45.717 04:42:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
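
The ctrl_has_scc probes running through this stretch of the trace reduce to a single bitwise test: ONCS (Optional NVM Command Support) bit 8 advertises the Copy command, and every QEMU controller here reports oncs=0x15d, so 0x15d & (1 << 8) = 0x100 is nonzero and all four qualify. A minimal standalone sketch of the same check, assuming nvme-cli's human-readable id-ctrl output format:

#!/usr/bin/env bash
# Sketch: does a controller advertise the NVMe Copy command (the
# Simple Copy test's prerequisite)? Mirrors the (( oncs & 1 << 8 ))
# test traced from nvme/functions.sh above.
ctrl=${1:-/dev/nvme1}
# Extract the hex value from the "oncs : 0x15d" line of id-ctrl.
oncs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
if (( oncs & 1 << 8 )); then
    echo "$ctrl supports Simple Copy (oncs=$oncs)"
else
    echo "$ctrl has no Simple Copy support (oncs=$oncs)"
fi

get_ctrl_with_feature then just echoes the first qualifying controller from its list, which is why the run settles on nvme1 (0000:00:10.0) in the trace that follows even though all four controllers pass.
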
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:20:45.718 04:42:52 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:20:45.718 04:42:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:20:45.718 04:42:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:20:45.718 04:42:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:45.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:46.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:20:46.490 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:20:46.490 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:20:46.490 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:20:46.490 04:42:53 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:46.490 04:42:53 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:46.490 04:42:53 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:46.490 04:42:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:20:46.490 ************************************
00:20:46.490 START TEST nvme_simple_copy ************************************
00:20:46.490 04:42:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:46.747 Initializing NVMe Controllers
00:20:46.747 Attaching to 0000:00:10.0
00:20:46.747 Controller supports SCC. Attached to 0000:00:10.0
00:20:46.747 Namespace ID: 1 size: 6GB
00:20:46.747 Initialization complete.
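
The copy statistics below are the heart of the test: simple_copy fills LBAs 0-63 of namespace 1 with random data, issues one Simple Copy command with destination LBA 256, reads both ranges back, and counts matching LBAs. A rough kernel-driver analogue of that sequence (the real test drives 0000:00:10.0 through SPDK's userspace PCIe driver; /dev/nvme1n1 and the nvme copy flag spellings are assumptions and vary across nvme-cli versions):

#!/usr/bin/env bash
# Hypothetical re-creation of the simple_copy I/O pattern with the
# kernel nvme driver and nvme-cli -- not the SPDK test binary itself.
dev=/dev/nvme1n1   # assumed namespace node for the 12340 controller
bs=4096            # Namespace Block Size reported below

# Fill LBAs 0-63 with random data.
dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct status=none

# Simple Copy: one source range (SLBA 0, NLB 63 -- NLB is 0-based,
# so 64 blocks), destination starting at LBA 256.
nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256

# Verify: every copied LBA should match its source.
cmp <(dd if="$dev" bs="$bs" count=64 iflag=direct status=none) \
    <(dd if="$dev" bs="$bs" skip=256 count=64 iflag=direct status=none) \
  && echo 'LBAs matching Written Data: 64'
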
00:20:46.747
00:20:46.747 Controller QEMU NVMe Ctrl (12340 )
00:20:46.747 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:20:46.747 Namespace Block Size:4096
00:20:46.747 Writing LBAs 0 to 63 with Random Data
00:20:46.747 Copied LBAs from 0 - 63 to the Destination LBA 256
00:20:46.747 LBAs matching Written Data: 64
00:20:46.747
00:20:46.747 real 0m0.258s
00:20:46.747 user 0m0.084s
00:20:46.747 sys 0m0.073s
00:20:46.747 ************************************
00:20:46.747 END TEST nvme_simple_copy ************************************
00:20:46.747 04:42:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:46.747 04:42:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:20:46.747 ************************************
00:20:46.747 END TEST nvme_scc ************************************
00:20:46.747
00:20:46.747 real 0m7.583s
00:20:46.747 user 0m1.059s
00:20:46.747 sys 0m1.296s
00:20:46.747 04:42:53 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:46.747 04:42:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:20:46.747 04:42:53 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:20:46.747 04:42:53 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:20:46.747 04:42:53 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:20:46.747 04:42:53 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:20:46.747 04:42:53 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:20:46.747 04:42:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:46.747 04:42:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:46.747 04:42:53 -- common/autotest_common.sh@10 -- # set +x
00:20:46.747 ************************************
00:20:46.747 START TEST nvme_fdp ************************************
00:20:46.747 04:42:53 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:20:47.082 * Looking for test storage...
00:20:47.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:47.082 04:42:53 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:20:47.082 04:42:53 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:20:47.082 04:42:53 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:20:47.082 04:42:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.082 04:42:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:20:47.082 04:42:54 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.083 04:42:54 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.083 --rc genhtml_branch_coverage=1 00:20:47.083 --rc genhtml_function_coverage=1 00:20:47.083 --rc genhtml_legend=1 00:20:47.083 --rc geninfo_all_blocks=1 00:20:47.083 --rc geninfo_unexecuted_blocks=1 00:20:47.083 00:20:47.083 ' 00:20:47.083 04:42:54 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.083 --rc genhtml_branch_coverage=1 00:20:47.083 --rc genhtml_function_coverage=1 00:20:47.083 --rc genhtml_legend=1 00:20:47.083 --rc geninfo_all_blocks=1 00:20:47.083 --rc geninfo_unexecuted_blocks=1 00:20:47.083 00:20:47.083 ' 00:20:47.083 04:42:54 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.083 --rc genhtml_branch_coverage=1 00:20:47.083 --rc genhtml_function_coverage=1 00:20:47.083 --rc genhtml_legend=1 00:20:47.083 --rc geninfo_all_blocks=1 00:20:47.083 --rc geninfo_unexecuted_blocks=1 00:20:47.083 00:20:47.083 ' 00:20:47.083 04:42:54 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.083 --rc genhtml_branch_coverage=1 00:20:47.083 --rc genhtml_function_coverage=1 00:20:47.083 --rc genhtml_legend=1 00:20:47.083 --rc geninfo_all_blocks=1 00:20:47.083 --rc geninfo_unexecuted_blocks=1 00:20:47.083 00:20:47.083 ' 00:20:47.083 04:42:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.083 04:42:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.083 04:42:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.083 04:42:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.083 04:42:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.083 04:42:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.083 04:42:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.083 04:42:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.083 04:42:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:20:47.083 04:42:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:47.083 04:42:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:20:47.083 04:42:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.083 04:42:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:47.342 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.342 Waiting for block devices as requested 00:20:47.342 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:47.600 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:47.600 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:47.600 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:52.867 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:52.867 04:42:59 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:20:52.867 04:42:59 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:52.867 04:42:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.867 04:42:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:52.867 04:42:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.867 04:42:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:52.867 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.867 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.867 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:52.868 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 
04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:52.868 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.868 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:20:52.869 04:42:59 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:20:52.869 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:20:52.869 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.869 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
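The mapping entry that follows files the namespace device just parsed under its numeric index. The extglob pattern in the surrounding for loop is the interesting part: for nvme0, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to @(ng0|nvme0n)*, so a single glob matches both the generic character device ng0n1 and the block device nvme0n1, and ${ns##*n} strips everything through the last 'n' to leave the namespace number used as the _ctrl_ns key. A small self-contained demo of just that glob, using a throwaway directory instead of a real /sys/class/nvme tree (the mktemp layout is invented for illustration):

#!/usr/bin/env bash
shopt -s extglob                        # needed for the @(...) alternation
ctrl=$(mktemp -d)/nvme0 && mkdir "$ctrl"
touch "$ctrl"/ng0n1 "$ctrl"/nvme0n1     # stand-ins for the char and block devices
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns=${ns##*/}                        # ng0n1 on the first pass, nvme0n1 on the second
    echo "nsid ${ns##*n} -> $ns"        # ${ns##*n} is 1 for both devices
done

Both devices land in the same _ctrl_ns slot (index 1): the trace first records _ctrl_ns[${ns##*n}]=ng0n1 here and then, after the second id-ns pass below, overwrites it with nvme0n1.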
00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:52.870 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.870 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.871 04:42:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:52.871 04:42:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.871 04:42:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:52.871 04:42:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.871 04:42:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:52.871 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:52.871 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
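Every field in this trace follows the same shape: nvme_get pipes nvme-cli's id-ctrl (or id-ns) text output through a read loop that splits each line on ':' and evals the pair into a dynamically named global associative array (the local -gA 'nvme1=()' seen above). A minimal standalone sketch of that idiom, with hypothetical names rather than the suite's exact code:

  # Sketch of the nvme_get idiom traced here: parse "field : value"
  # lines from nvme-cli into a runtime-named bash associative array.
  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}      # trim padding: "vid   " -> "vid"
          [[ -n $val ]] || continue     # skip lines with no value
          # eval because the array name (nvme1, ng1n1, ...) is dynamic
          eval "${ref}[${reg}]=\"${val# }\""
      done < <(nvme id-ctrl "$dev")
  }
  nvme_get_sketch nvme1 /dev/nvme1 && echo "${nvme1[sn]}"   # e.g. 12340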
00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
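Two of the values just captured, nvme1[wctemp]=343 and nvme1[cctemp]=373, are the warning and critical temperature thresholds, which NVMe id-ctrl reports in Kelvin; converting them is plain shell arithmetic:

  # WCTEMP/CCTEMP from id-ctrl are Kelvin per the NVMe spec.
  wctemp=343 cctemp=373
  echo "warning at $((wctemp - 273))C, critical at $((cctemp - 273))C"
  # -> warning at 70C, critical at 100C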
00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.872 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:20:52.873 04:42:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
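The ng1n1 pass underway here comes from the extglob pattern at nvme/functions.sh@54, which matches both namespace node flavours under the controller: the generic character device (ng1n1, /dev/ng1n1) being parsed now, and the block device (nvme1n1), parsed next in this trace. A standalone illustration, assuming the sysfs layout seen in this run:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme1
  # "ng${ctrl##*nvme}"  -> ng1     (generic char-node prefix)
  # "${ctrl##*/}n"      -> nvme1n  (block-node prefix)
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "${ns##*/}"               # prints ng1n1, then nvme1n1
  done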
00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.873 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:20:52.874 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
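For geometry, the interesting ng1n1 fields captured above are nsze=0x17a17a and flbas=0x7: bits 3:0 of FLBAS select the active LBA format, and the matching lbaf7 entry recorded a few lines below reports lbads:12, i.e. 2^12 = 4096-byte data blocks plus ms:64 bytes of metadata. A quick decode of those captured values (roughly 6.3 GB of data area):

  # Decode the captured ng1n1 identify-namespace values.
  flbas=0x7 nsze=0x17a17a lbads=12     # lbads comes from the lbaf7 entry
  fmt=$((flbas & 0xf))                 # -> 7, matches "lbaf7 ... (in use)"
  bs=$((1 << lbads))                   # -> 4096-byte data blocks
  echo "lbaf$fmt: $((nsze)) blocks x $bs B = $((nsze * bs)) bytes"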
00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.874 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:52.874 04:42:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.874 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:52.875 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
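
The eval/IFS/read records running through this part of the log all come from one helper, nvme_get in nvme/functions.sh: it runs the nvme-cli binary shown in the trace, splits each output line on the first colon, and evals the pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace itself (simplified; the real helper's whitespace normalization and quoting are more thorough than shown here):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declares global assoc array nvme1n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "lbaf  0 " -> "lbaf0"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"  # nvme1n1[nsze]=0x17a17a, ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # invoked as in the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1
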
00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:52.875 04:42:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.875 04:42:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:20:52.875 04:42:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.875 04:42:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.875 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
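
Two of the values just captured are packed fields defined by the NVMe spec: ver encodes major/minor/tertiary version bytes, and mdts (recorded a few lines above) gives the maximum data transfer size as a power of two in units of CAP.MPSMIN. A standalone decode of the traced numbers (values copied from the records above; the 4 KiB CAP.MPSMIN is an assumption, typical of QEMU emulated controllers):

    declare -A nvme2=([ver]=0x10400 [mdts]=7)   # values as captured in the trace
    ver=$(( nvme2[ver] ))
    printf 'NVMe version %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe version 1.4.0
    echo "max transfer: $(( (1 << nvme2[mdts]) * 4 )) KiB"   # 2^7 * 4 KiB = 512 KiB
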
00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:52.876 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
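
The wctemp/cctemp values captured just above are kelvin temperatures per the NVMe spec (warning and critical composite temperature thresholds). A quick conversion, using a hypothetical helper that is not part of functions.sh:

    declare -A nvme2=([wctemp]=343 [cctemp]=373)   # values from the trace
    k_to_c() { echo $(( $1 - 273 )); }
    echo "warning threshold:  $(k_to_c "${nvme2[wctemp]}") C"   # 70 C
    echo "critical threshold: $(k_to_c "${nvme2[cctemp]}") C"   # 100 C
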
00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:52.876 04:42:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:52.876 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
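
The sqes/cqes fields captured a few records above in this id-ctrl dump pack log2 entry sizes: the required (minimum) size in the low nibble and the maximum in the high nibble. A hypothetical decode of the traced values, again outside functions.sh:

    declare -A nvme2=([sqes]=0x66 [cqes]=0x44)   # values from the trace
    sqes=$(( nvme2[sqes] )); cqes=$(( nvme2[cqes] ))
    printf 'SQE: %d..%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))  # 64..64
    printf 'CQE: %d..%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))  # 16..16
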
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.877 
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:20:52.877 04:42:59 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
[... identical IFS=: / read -r reg val / [[ -n ... ]] / eval trace repeated for the remaining ng2n1 id-ns fields: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 ' ...]
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
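The functions.sh@16-23 markers above trace SPDK's nvme_get helper, which captures the key/value output of `nvme id-ns` into a global bash associative array such as ng2n1[...]. A minimal sketch of that loop, reconstructed from the trace markers rather than from the actual SPDK source, so the exact variable handling is an assumption:

    # Sketch of nvme_get as implied by functions.sh@16-23 in the trace above
    # (assumption: reconstructed, not the verbatim SPDK implementation).
    nvme_get() {
    	local ref=$1 reg val   # @17: target array name, e.g. ng2n1
    	shift                  # @18: remaining args are the nvme-cli command
    	local -gA "$ref=()"    # @20: declare the array globally

    	# nvme-cli prints lines like "nsze : 0x100000"; split on the first ':'.
    	while IFS=: read -r reg val; do
    		[[ -n $val ]] || continue              # @22: skip lines without a value
    		reg=${reg//[[:space:]]/}               # "lbaf  4 " -> "lbaf4"
    		eval "${ref}[${reg}]=\"${val# }\""     # @23: ng2n1[nsze]="0x100000"
    	done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After a call like `nvme_get ng2n1 id-ns /dev/ng2n1`, fields are addressable as `${ng2n1[nsze]}`, `${ng2n1[flbas]}`, and so on, which is exactly what the eval lines in this trace are populating.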
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:20:52.878 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
[... register-parsing trace for ng2n2; every field matches ng2n1 above: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun/nawupf/nacwu/nabsn/nabo/nabspf/noiob/nvmcap/npwg/npwa/npdg/npda/nows all 0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0..lbaf7 as above ...]
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
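The functions.sh@54-58 markers show the per-controller namespace scan that drives these nvme_get calls: an extglob pattern matches both the generic char devices (ngXnY) and the block devices (nvmeXnY) under the controller's sysfs directory. A sketch under those assumptions ($ctrl being a path such as /sys/class/nvme/nvme2; the loop body reconstructed from the trace):

    # Namespace enumeration as traced at functions.sh@54-58 (sketch).
    shopt -s extglob
    declare -A _ctrl_ns=()

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    	[[ -e $ns ]] || continue            # @55: skip if the glob didn't match
    	ns_dev=${ns##*/}                    # @56: e.g. ng2n2 or nvme2n2
    	nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: populate the array
    	_ctrl_ns[${ns##*n}]=$ns_dev         # @58: index by namespace id ("2" for ng2n2)
    done

Because the map is keyed by namespace id alone, the later nvmeXnY iterations visible further down in this trace would overwrite the ngXnY entries for the same id; whether that is intentional in functions.sh cannot be confirmed from the log alone.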
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:20:52.879 04:42:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
[... register-parsing trace for ng2n3; all fields identical to the ng2n1/ng2n2 blocks above (the per-test timestamp rolls over from 04:42:59 to 04:43:00 during the lbaf0..lbaf7 entries) ...]
00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
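The captured values decode to a concrete geometry: flbas=0x4 selects LBA format 4, whose descriptor above is 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte logical blocks with no metadata, and nsze=0x100000 blocks of that size give 4 GiB per namespace. A quick check in plain bash arithmetic (not part of functions.sh):

    # Decode the id-ns values captured above: lbads=12 -> 4096-byte blocks.
    nsze=0x100000; lbads=12
    echo $(( nsze * (1 << lbads) ))            # 4294967296 bytes
    echo $(( nsze * (1 << lbads) >> 30 ))GiB   # 4GiB per namespace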
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.880 
04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:20:52.880 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:20:52.881 04:43:00 
nvme_fdp -- nvme/functions.sh@21-23 -- # id-ns /dev/nvme2n1 (continued): nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0 nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0' nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0'
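For readers following the trace: the lines tagged functions.sh@16-23 are a small parser that feeds nvme-cli's human-readable "field : value" output through IFS=: into a bash associative array, one entry per field. A minimal standalone sketch of the same technique (function name and trimming details here are illustrative, not the exact SPDK helper):

    #!/usr/bin/env bash
    # Sketch of the id-ns/id-ctrl parse loop traced above. Assumes nvme-cli
    # prints one "field : value" pair per line, as in this log.
    parse_nvme_output() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # declare global assoc array, e.g. nvme2n1
        while IFS=: read -r reg val; do      # split each line at the first ':'
            reg=${reg//[[:space:]]/}         # strip padding around the field name
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # nvme2n1[nsze]=0x100000, ...
        done < <(nvme "$cmd" "$dev")
    }
    # usage: parse_nvme_output nvme2n1 id-ns /dev/nvme2n1

Note that a multi-colon value such as "ms:0 lbads:9 rp:0" survives intact because read assigns everything after the first colon to val.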
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n2 exists -> nvme_get nvme2n2 id-ns /dev/nvme2n2
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:20:52.881 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0 nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0
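A note on the values just captured: nlbaf=7 means eight LBA formats (the field is 0-based), and the low four bits of flbas select the format in use, which is why lbaf4 above carries the "(in use)" marker. lbads is log2 of the LBA data size, so lbads:12 is a 4096-byte block. A quick sanity check in bash:

    flbas=0x4
    echo "format in use: lbaf$(( flbas & 0xf ))"   # -> lbaf4
    lbads=12                                       # from "lbaf4: ms:0 lbads:12 rp:0"
    echo "block size: $(( 1 << lbads )) bytes"     # -> 4096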
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0 nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0' nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0'
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0'
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n3 exists -> nvme_get nvme2n3 id-ns /dev/nvme2n3
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0 nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1
00:20:52.882 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0 nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0
00:20:53.143 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000 nvme2n3[lbaf0]='ms:0 lbads:9 rp:0' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0'
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0'
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@60-63 -- # ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:12.0 ordered_ctrls[2]=nvme2
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme3 (pci 0000:00:13.0), pci_can_use 0000:00:13.0 -> return 0, ctrl_dev=nvme3
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
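The loop structure visible in the functions.sh@47-63 tags: walk /sys/class/nvme, skip any controller that pci_can_use rejects (the allow/block lists are empty in this run, hence the immediate return 0), parse id-ctrl, then glob the controller's namespaces with an extglob pattern and parse id-ns for each. Roughly, as a sketch mirroring the trace (the PCI-address derivation is an assumption, and per-controller bookkeeping such as resetting _ctrl_ns is elided):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls _ctrl_ns
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0 (assumed sysfs layout)
        pci_can_use "$pci" || continue                   # honors the allow/block lists in scripts/common.sh
        ctrl_dev=${ctrl##*/}                             # nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # namespaces appear as nvme3n1, nvme3n2, ... (or ng3* char devices)
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                  # index by namespace number
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done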
00:20:53.144 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 ' nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7
00:20:53.145 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[cntlid]=0 nvme3[ver]=0x10400 nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1 nvme3[fguid]=00000000-0000-0000-0000-000000000000
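The ver field packs the controller's NVMe spec version as major<<16 | minor<<8 | tertiary, so the 0x10400 read back above decodes to NVMe 1.4.0:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0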
00:20:53.145 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0 nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3
00:20:53.145 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0 nvme3[wctemp]=343 nvme3[cctemp]=373 nvme3[mtfa]=0 nvme3[hmpre]=0
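wctemp and cctemp are reported in Kelvin (the spec's warning and critical composite temperature thresholds), so this QEMU controller's 343/373 work out to roughly 70 °C and 100 °C:

    wctemp=343 cctemp=373
    echo "warning: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"
    # -> warning: 70 C, critical: 100 C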
00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[hmmin]=0 nvme3[tnvmcap]=0 nvme3[unvmcap]=0 nvme3[rpmbs]=0 nvme3[edstt]=0 nvme3[dsto]=0 nvme3[fwug]=0 nvme3[kas]=0 nvme3[hctma]=0 nvme3[mntmt]=0
00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme3[mxtmt]=0 nvme3[sanicap]=0 nvme3[hmminds]=0 nvme3[hmmaxd]=0 nvme3[nsetidmax]=0 nvme3[endgidmax]=1 nvme3[anatt]=0 nvme3[anacap]=0 nvme3[anagrpmax]=0
00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 --
# eval 'nvme3[nanagrpid]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.146 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:53.147 04:43:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:20:53.147 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:20:53.148 04:43:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:20:53.148 04:43:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:20:53.148 04:43:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.973 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.973 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.973 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.973 04:43:01 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:53.973 04:43:01 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:53.973 04:43:01 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.973 04:43:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 ************************************ 00:20:53.973 START TEST nvme_flexible_data_placement 00:20:53.973 ************************************ 00:20:53.973 04:43:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:54.232 Initializing NVMe Controllers 00:20:54.232 Attaching to 0000:00:13.0 00:20:54.232 Controller supports FDP Attached to 0000:00:13.0 00:20:54.232 Namespace ID: 1 Endurance Group ID: 1 00:20:54.232 Initialization complete. 
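Before the report below, it is worth unpacking why nvme3 was the controller handed to the fdp test: ctrl_has_fdp() in nvme/functions.sh reduces to a single bitmask test against the Identify Controller CTRATT field cached during the identify pass above. Stripped of the xtrace plumbing, the check is roughly:

    # Sketch of nvme/functions.sh ctrl_has_fdp; get_ctratt echoes the cached
    # CTRATT value parsed for each controller earlier in this trace.
    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_ctratt "$ctrl")   # 0x8000 for nvme0/1/2, 0x88010 for nvme3
        (( ctratt & 1 << 19 ))         # bit 19 set => Flexible Data Placement
    }

Only nvme3 reports CTRATT 0x88010, which has bit 19 (0x80000) set, so it is the one echoed back and bound to 0000:00:13.0.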
00:20:54.232 00:20:54.232 ================================== 00:20:54.232 == FDP tests for Namespace: #01 == 00:20:54.232 ================================== 00:20:54.232 00:20:54.232 Get Feature: FDP: 00:20:54.232 ================= 00:20:54.232 Enabled: Yes 00:20:54.232 FDP configuration Index: 0 00:20:54.232 00:20:54.232 FDP configurations log page 00:20:54.232 =========================== 00:20:54.232 Number of FDP configurations: 1 00:20:54.232 Version: 0 00:20:54.232 Size: 112 00:20:54.232 FDP Configuration Descriptor: 0 00:20:54.232 Descriptor Size: 96 00:20:54.232 Reclaim Group Identifier format: 2 00:20:54.232 FDP Volatile Write Cache: Not Present 00:20:54.232 FDP Configuration: Valid 00:20:54.232 Vendor Specific Size: 0 00:20:54.232 Number of Reclaim Groups: 2 00:20:54.232 Number of Reclaim Unit Handles: 8 00:20:54.232 Max Placement Identifiers: 128 00:20:54.232 Number of Namespaces Supported: 256 00:20:54.232 Reclaim Unit Nominal Size: 6000000 bytes 00:20:54.232 Estimated Reclaim Unit Time Limit: Not Reported 00:20:54.232 RUH Desc #000: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #001: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #002: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #003: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #004: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #005: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #006: RUH Type: Initially Isolated 00:20:54.232 RUH Desc #007: RUH Type: Initially Isolated 00:20:54.232 00:20:54.232 FDP reclaim unit handle usage log page 00:20:54.232 ====================================== 00:20:54.232 Number of Reclaim Unit Handles: 8 00:20:54.232 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:54.232 RUH Usage Desc #001: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #002: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #003: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #004: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #005: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #006: RUH Attributes: Unused 00:20:54.232 RUH Usage Desc #007: RUH Attributes: Unused 00:20:54.232 00:20:54.232 FDP statistics log page 00:20:54.232 ======================= 00:20:54.232 Host bytes with metadata written: 1001213952 00:20:54.232 Media bytes with metadata written: 1001451520 00:20:54.232 Media bytes erased: 0 00:20:54.232 00:20:54.232 FDP Reclaim unit handle status 00:20:54.232 ============================== 00:20:54.232 Number of RUHS descriptors: 2 00:20:54.232 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000052b 00:20:54.232 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:20:54.232 00:20:54.232 FDP write on placement id: 0 success 00:20:54.232 00:20:54.232 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:20:54.232 00:20:54.232 IO mgmt send: RUH update for Placement ID: #0 Success 00:20:54.232 00:20:54.232 Get Feature: FDP Events for Placement handle: #0 00:20:54.232 ======================== 00:20:54.232 Number of FDP Events: 6 00:20:54.232 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:20:54.232 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:20:54.232 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:20:54.232 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:20:54.232 FDP Event: #4 Type: Media Reallocated Enabled: No 00:20:54.232 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:20:54.232 00:20:54.232 FDP events log
page 00:20:54.232 =================== 00:20:54.232 Number of FDP events: 1 00:20:54.232 FDP Event #0: 00:20:54.232 Event Type: RU Not Written to Capacity 00:20:54.232 Placement Identifier: Valid 00:20:54.232 NSID: Valid 00:20:54.232 Location: Valid 00:20:54.232 Placement Identifier: 0 00:20:54.232 Event Timestamp: 5 00:20:54.232 Namespace Identifier: 1 00:20:54.232 Reclaim Group Identifier: 0 00:20:54.232 Reclaim Unit Handle Identifier: 0 00:20:54.232 00:20:54.232 FDP test passed 00:20:54.232 00:20:54.232 real 0m0.233s 00:20:54.232 user 0m0.068s 00:20:54.232 sys 0m0.065s 00:20:54.232 04:43:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.232 04:43:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:20:54.232 ************************************ 00:20:54.232 END TEST nvme_flexible_data_placement 00:20:54.232 ************************************ 00:20:54.232 00:20:54.232 real 0m7.459s 00:20:54.232 user 0m1.082s 00:20:54.232 sys 0m1.343s 00:20:54.232 04:43:01 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.232 04:43:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:54.232 ************************************ 00:20:54.232 END TEST nvme_fdp 00:20:54.232 ************************************ 00:20:54.232 04:43:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:20:54.232 04:43:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:54.232 04:43:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:54.232 04:43:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.232 04:43:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.232 ************************************ 00:20:54.232 START TEST nvme_rpc 00:20:54.232 ************************************ 00:20:54.232 04:43:01 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:54.491 * Looking for test storage... 
00:20:54.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.491 04:43:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.491 --rc genhtml_branch_coverage=1 00:20:54.491 --rc genhtml_function_coverage=1 00:20:54.491 --rc genhtml_legend=1 00:20:54.491 --rc geninfo_all_blocks=1 00:20:54.491 --rc geninfo_unexecuted_blocks=1 00:20:54.491 00:20:54.491 ' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.491 --rc genhtml_branch_coverage=1 00:20:54.491 --rc genhtml_function_coverage=1 00:20:54.491 --rc genhtml_legend=1 00:20:54.491 --rc geninfo_all_blocks=1 00:20:54.491 --rc geninfo_unexecuted_blocks=1 00:20:54.491 00:20:54.491 ' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.491 --rc genhtml_branch_coverage=1 00:20:54.491 --rc genhtml_function_coverage=1 00:20:54.491 --rc genhtml_legend=1 00:20:54.491 --rc geninfo_all_blocks=1 00:20:54.491 --rc geninfo_unexecuted_blocks=1 00:20:54.491 00:20:54.491 ' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.491 --rc genhtml_branch_coverage=1 00:20:54.491 --rc genhtml_function_coverage=1 00:20:54.491 --rc genhtml_legend=1 00:20:54.491 --rc geninfo_all_blocks=1 00:20:54.491 --rc geninfo_unexecuted_blocks=1 00:20:54.491 00:20:54.491 ' 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65829 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:54.491 04:43:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65829 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65829 ']' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.491 04:43:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.491 [2024-11-27 04:43:01.681434] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
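The bdf picked a few lines up is derived entirely from the generated SPDK config: get_nvme_bdfs() asks gen_nvme.sh to emit a bdev config and pulls each controller's PCI address out with jq, and get_first_nvme_bdf() takes the head of that list. Condensed from the trace above, roughly:

    # Sketch of the common/autotest_common.sh helpers traced above;
    # $rootdir points at the spdk checkout.
    get_nvme_bdfs() {
        local bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1     # no NVMe devices found
        printf '%s\n' "${bdfs[@]}"
    }
    get_first_nvme_bdf() {
        local bdfs=($(get_nvme_bdfs))
        echo "${bdfs[0]}"                      # 0000:00:10.0 in this run
    }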
00:20:54.492 [2024-11-27 04:43:01.681555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65829 ] 00:20:54.750 [2024-11-27 04:43:01.838753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:54.750 [2024-11-27 04:43:01.942269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.750 [2024-11-27 04:43:01.942472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.683 04:43:02 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.683 04:43:02 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:55.683 04:43:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:55.683 Nvme0n1 00:20:55.683 04:43:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:55.683 04:43:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:55.941 request: 00:20:55.941 { 00:20:55.941 "bdev_name": "Nvme0n1", 00:20:55.941 "filename": "non_existing_file", 00:20:55.941 "method": "bdev_nvme_apply_firmware", 00:20:55.941 "req_id": 1 00:20:55.941 } 00:20:55.941 Got JSON-RPC error response 00:20:55.941 response: 00:20:55.941 { 00:20:55.941 "code": -32603, 00:20:55.941 "message": "open file failed." 00:20:55.941 } 00:20:55.941 04:43:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:55.941 04:43:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:55.941 04:43:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:56.199 04:43:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:56.199 04:43:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65829 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65829 ']' 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65829 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65829 00:20:56.199 killing process with pid 65829 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65829' 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65829 00:20:56.199 04:43:03 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65829 00:20:57.573 00:20:57.573 real 0m3.291s 00:20:57.573 user 0m6.251s 00:20:57.573 sys 0m0.515s 00:20:57.573 04:43:04 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.573 04:43:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.573 ************************************ 00:20:57.573 END TEST nvme_rpc 00:20:57.573 ************************************ 00:20:57.573 04:43:04 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:57.573 04:43:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:20:57.573 04:43:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.573 04:43:04 -- common/autotest_common.sh@10 -- # set +x 00:20:57.573 ************************************ 00:20:57.573 START TEST nvme_rpc_timeouts 00:20:57.573 ************************************ 00:20:57.573 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:57.833 * Looking for test storage... 00:20:57.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.833 04:43:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.833 --rc genhtml_branch_coverage=1 00:20:57.833 --rc genhtml_function_coverage=1 00:20:57.833 --rc genhtml_legend=1 00:20:57.833 --rc geninfo_all_blocks=1 00:20:57.833 --rc geninfo_unexecuted_blocks=1 00:20:57.833 00:20:57.833 ' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.833 --rc genhtml_branch_coverage=1 00:20:57.833 --rc genhtml_function_coverage=1 00:20:57.833 --rc genhtml_legend=1 00:20:57.833 --rc geninfo_all_blocks=1 00:20:57.833 --rc geninfo_unexecuted_blocks=1 00:20:57.833 00:20:57.833 ' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.833 --rc genhtml_branch_coverage=1 00:20:57.833 --rc genhtml_function_coverage=1 00:20:57.833 --rc genhtml_legend=1 00:20:57.833 --rc geninfo_all_blocks=1 00:20:57.833 --rc geninfo_unexecuted_blocks=1 00:20:57.833 00:20:57.833 ' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.833 --rc genhtml_branch_coverage=1 00:20:57.833 --rc genhtml_function_coverage=1 00:20:57.833 --rc genhtml_legend=1 00:20:57.833 --rc geninfo_all_blocks=1 00:20:57.833 --rc geninfo_unexecuted_blocks=1 00:20:57.833 00:20:57.833 ' 00:20:57.833 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:57.833 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65894 00:20:57.833 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65894 00:20:57.833 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65926 00:20:57.834 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
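With the target up, the test that follows is a straight before/after diff of the bdev_nvme options: snapshot the JSON config, change the three timeout knobs over RPC, snapshot again, and assert each knob actually moved. In outline, using the $rpc_py and tmpfile variables from the trace:

    # Outline of nvme/nvme_rpc_timeouts.sh; the exact flags appear below.
    $rpc_py save_config > "$tmpfile_default_settings"
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > "$tmpfile_modified_settings"
    # then compare action_on_timeout / timeout_us / timeout_admin_us

The per-setting comparison appears in the trace below.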
00:20:57.834 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65926 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65926 ']' 00:20:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.834 04:43:04 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:57.834 04:43:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:57.834 [2024-11-27 04:43:04.950844] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:20:57.834 [2024-11-27 04:43:04.950967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65926 ] 00:20:58.092 [2024-11-27 04:43:05.108693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:58.092 [2024-11-27 04:43:05.211692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.092 [2024-11-27 04:43:05.211856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.658 Checking default timeout settings: 00:20:58.658 04:43:05 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.658 04:43:05 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:20:58.658 04:43:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:20:58.658 04:43:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:58.916 Making settings changes with rpc: 00:20:58.916 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:20:58.916 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:20:59.174 Check default vs. modified settings: 00:20:59.174 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:20:59.174 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:59.432 Setting action_on_timeout is changed as expected. 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:59.432 Setting timeout_us is changed as expected. 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
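Each setting is extracted from both snapshots with the same grep | awk | sed pipeline seen above, the sed pass stripping JSON punctuation so bare values compare cleanly. Reconstructed from the trace:

    for setting in $settings_to_check; do   # action_on_timeout timeout_us timeout_admin_us
        setting_before=$(grep "$setting" "$tmpfile_default_settings" \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" "$tmpfile_modified_settings" \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            exit 1   # value did not change; the RPC was silently ignored
        fi
        echo "Setting $setting is changed as expected."
    done

For timeout_us that is 0 before versus 12000000 after, matching the comparison traced here.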
00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:20:59.432 Setting timeout_admin_us is changed as expected. 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65894 /tmp/settings_modified_65894 00:20:59.432 04:43:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65926 00:20:59.432 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65926 ']' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65926 00:20:59.432 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:20:59.432 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.432 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65926 00:20:59.691 killing process with pid 65926 00:20:59.691 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.691 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.691 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65926' 00:20:59.691 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65926 00:20:59.691 04:43:06 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65926 00:21:01.063 RPC TIMEOUT SETTING TEST PASSED. 00:21:01.063 04:43:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
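killprocess, traced twice in this section (pids 65829 and 65926), is deliberately defensive about what it signals: it verifies the pid is alive and reads the process name before killing, so a target launched under sudo can be handled separately. A minimal sketch of the path taken in this run:

    # Sketch of common/autotest_common.sh killprocess; the branch that
    # re-signals via sudo when comm resolves to "sudo" is omitted here.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap before returning
    }

In both runs the comm name resolves to reactor_0, the SPDK reactor thread on core 0, so the plain kill/wait path is taken.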
00:21:01.063 00:21:01.063 real 0m3.337s 00:21:01.063 user 0m6.393s 00:21:01.063 sys 0m0.497s 00:21:01.063 ************************************ 00:21:01.063 END TEST nvme_rpc_timeouts 00:21:01.063 ************************************ 00:21:01.063 04:43:08 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.063 04:43:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:01.063 04:43:08 -- spdk/autotest.sh@239 -- # uname -s 00:21:01.063 04:43:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:21:01.063 04:43:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:01.063 04:43:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:01.063 04:43:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.063 04:43:08 -- common/autotest_common.sh@10 -- # set +x 00:21:01.063 ************************************ 00:21:01.063 START TEST sw_hotplug 00:21:01.063 ************************************ 00:21:01.063 04:43:08 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:01.063 * Looking for test storage... 00:21:01.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:01.063 04:43:08 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:01.063 04:43:08 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:01.063 04:43:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:21:01.063 04:43:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.063 04:43:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.064 04:43:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:21:01.064 04:43:08 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.064 04:43:08 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.064 --rc genhtml_branch_coverage=1 00:21:01.064 --rc genhtml_function_coverage=1 00:21:01.064 --rc genhtml_legend=1 00:21:01.064 --rc geninfo_all_blocks=1 00:21:01.064 --rc geninfo_unexecuted_blocks=1 00:21:01.064 00:21:01.064 ' 00:21:01.064 04:43:08 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.064 --rc genhtml_branch_coverage=1 00:21:01.064 --rc genhtml_function_coverage=1 00:21:01.064 --rc genhtml_legend=1 00:21:01.064 --rc geninfo_all_blocks=1 00:21:01.064 --rc geninfo_unexecuted_blocks=1 00:21:01.064 00:21:01.064 ' 00:21:01.064 04:43:08 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.064 --rc genhtml_branch_coverage=1 00:21:01.064 --rc genhtml_function_coverage=1 00:21:01.064 --rc genhtml_legend=1 00:21:01.064 --rc geninfo_all_blocks=1 00:21:01.064 --rc geninfo_unexecuted_blocks=1 00:21:01.064 00:21:01.064 ' 00:21:01.064 04:43:08 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.064 --rc genhtml_branch_coverage=1 00:21:01.064 --rc genhtml_function_coverage=1 00:21:01.064 --rc genhtml_legend=1 00:21:01.064 --rc geninfo_all_blocks=1 00:21:01.064 --rc geninfo_unexecuted_blocks=1 00:21:01.064 00:21:01.064 ' 00:21:01.064 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:01.630 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.630 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.630 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.630 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.630 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:21:01.630 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:21:01.630 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
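Unrolled, the enumeration traced next is a class-code match against lspci: NVMe controllers are PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The core of nvme_in_userspace boils down to the pipeline below, minus the PCI_ALLOWED/PCI_BLOCKED filtering that pci_can_use layers on top:

    # Condensed from scripts/common.sh iter_pci_class_code as traced below.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc="0108" -F' ' '{ if (cc ~ $2) print $1 }' | tr -d '"'

On this VM that yields the four QEMU controllers at 0000:00:10.0 through 0000:00:13.0, of which the first two are kept once nvme_count=2 truncates the array.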
00:21:01.630 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@233 -- # local class
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:21:01.630 04:43:08 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@18 -- # local i
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@18 -- # local i
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@18 -- # local i
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@18 -- # local i
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@328 -- # (( 4 ))
00:21:01.631 04:43:08 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:21:01.631 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2
00:21:01.631 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:21:01.631 04:43:08 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:21:01.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:02.147 Waiting for block devices as requested
00:21:02.147 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:21:02.147 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:21:02.147 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:21:02.405 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:21:07.667 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:21:07.667 04:43:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0'
00:21:07.667 04:43:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:07.667 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:21:07.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:07.667 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0
00:21:07.924 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0
00:21:08.181 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:21:08.181 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:21:08.181 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:21:08.181 04:43:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66776
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:21:08.439 04:43:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:21:08.439 04:43:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:21:08.439 04:43:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:21:08.439 04:43:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:21:08.439 04:43:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:21:08.439 04:43:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:21:08.439 Initializing NVMe Controllers
00:21:08.439 Attaching to 0000:00:10.0
00:21:08.439 Attaching to 0000:00:11.0
00:21:08.439 Attached to 0000:00:10.0
00:21:08.439 Attached to 0000:00:11.0
00:21:08.439 Initialization complete. Starting I/O...
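Everything from nvme_in_userspace down to the four echoed BDFs above reduces to a single lspci filter: PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). Reassembled from the pipeline fragments in the trace into one standalone line (the per-device pci_can_use/PCI_ALLOWED checks are omitted):

# One-liner equivalent of the enumeration traced above: class+subclass
# "0108" plus prog-if "-p02" selects NVMe controllers; print the full
# domain:bus:device.function address of each.
lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'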
00:21:08.439 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:21:08.439 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:21:08.439 00:21:09.811 QEMU NVMe Ctrl (12340 ): 2538 I/Os completed (+2538) 00:21:09.811 QEMU NVMe Ctrl (12341 ): 2661 I/Os completed (+2661) 00:21:09.811 00:21:10.745 QEMU NVMe Ctrl (12340 ): 6049 I/Os completed (+3511) 00:21:10.745 QEMU NVMe Ctrl (12341 ): 6266 I/Os completed (+3605) 00:21:10.745 00:21:11.704 QEMU NVMe Ctrl (12340 ): 9722 I/Os completed (+3673) 00:21:11.704 QEMU NVMe Ctrl (12341 ): 9958 I/Os completed (+3692) 00:21:11.704 00:21:12.637 QEMU NVMe Ctrl (12340 ): 13306 I/Os completed (+3584) 00:21:12.637 QEMU NVMe Ctrl (12341 ): 13539 I/Os completed (+3581) 00:21:12.637 00:21:13.570 QEMU NVMe Ctrl (12340 ): 16491 I/Os completed (+3185) 00:21:13.570 QEMU NVMe Ctrl (12341 ): 16907 I/Os completed (+3368) 00:21:13.570 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:14.534 [2024-11-27 04:43:21.417700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:14.534 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:14.534 [2024-11-27 04:43:21.418925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.418993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.419020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.419049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:14.534 [2024-11-27 04:43:21.421007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.421078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.421105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.421125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:14.534 [2024-11-27 04:43:21.430470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
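The per-device "echo 1" at sw_hotplug.sh line 40 above is what actually yanks each controller. xtrace does not print redirections, so the targets below are inferred from the standard sysfs surprise-removal interface rather than taken from the log:

# Inferred sysfs targets for the removal traced above (redirections are
# invisible to xtrace): unplug both allowed controllers...
echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove
echo 1 > /sys/bus/pci/devices/0000:00:11.0/remove
# ...and, plausibly, the later lone "echo 1" at line 56 is the bus rescan
# that lets them come back:
echo 1 > /sys/bus/pci/rescan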
00:21:14.534 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:14.534 [2024-11-27 04:43:21.431616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.431664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.431695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.431722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:14.534 [2024-11-27 04:43:21.433520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.433564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.433594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 [2024-11-27 04:43:21.433619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:14.534 EAL: Cannot open sysfs resource 00:21:14.534 EAL: pci_scan_one(): cannot parse resource 00:21:14.534 EAL: Scan for (pci) bus failed. 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:14.534 Attaching to 0000:00:10.0 00:21:14.534 Attached to 0000:00:10.0 00:21:14.534 QEMU NVMe Ctrl (12340 ): 120 I/Os completed (+120) 00:21:14.534 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:14.534 04:43:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:14.534 Attaching to 0000:00:11.0 00:21:14.534 Attached to 0000:00:11.0 00:21:15.518 QEMU NVMe Ctrl (12340 ): 3277 I/Os completed (+3157) 00:21:15.518 QEMU NVMe Ctrl (12341 ): 3106 I/Os completed (+3106) 00:21:15.518 00:21:16.448 QEMU NVMe Ctrl (12340 ): 6520 I/Os completed (+3243) 00:21:16.448 QEMU NVMe Ctrl (12341 ): 6365 I/Os completed (+3259) 00:21:16.448 00:21:17.816 QEMU NVMe Ctrl (12340 ): 9858 I/Os completed (+3338) 00:21:17.816 QEMU NVMe Ctrl (12341 ): 9995 I/Os completed (+3630) 00:21:17.816 00:21:18.749 QEMU NVMe Ctrl (12340 ): 12979 I/Os completed (+3121) 00:21:18.749 QEMU NVMe Ctrl (12341 ): 13131 I/Os completed (+3136) 00:21:18.749 00:21:19.703 QEMU NVMe Ctrl (12340 ): 16665 I/Os completed (+3686) 00:21:19.703 QEMU NVMe Ctrl (12341 ): 16814 I/Os completed (+3683) 00:21:19.703 00:21:20.636 QEMU NVMe Ctrl (12340 ): 19792 I/Os completed (+3127) 00:21:20.636 QEMU NVMe Ctrl (12341 ): 19981 I/Os completed (+3167) 00:21:20.636 00:21:21.570 QEMU NVMe Ctrl (12340 ): 23310 I/Os completed (+3518) 00:21:21.571 
QEMU NVMe Ctrl (12341 ): 23500 I/Os completed (+3519) 00:21:21.571 00:21:22.507 QEMU NVMe Ctrl (12340 ): 26926 I/Os completed (+3616) 00:21:22.507 QEMU NVMe Ctrl (12341 ): 27226 I/Os completed (+3726) 00:21:22.507 00:21:23.454 QEMU NVMe Ctrl (12340 ): 30124 I/Os completed (+3198) 00:21:23.454 QEMU NVMe Ctrl (12341 ): 30442 I/Os completed (+3216) 00:21:23.454 00:21:24.840 QEMU NVMe Ctrl (12340 ): 33254 I/Os completed (+3130) 00:21:24.840 QEMU NVMe Ctrl (12341 ): 33571 I/Os completed (+3129) 00:21:24.840 00:21:25.406 QEMU NVMe Ctrl (12340 ): 36840 I/Os completed (+3586) 00:21:25.406 QEMU NVMe Ctrl (12341 ): 37145 I/Os completed (+3574) 00:21:25.406 00:21:26.780 QEMU NVMe Ctrl (12340 ): 40383 I/Os completed (+3543) 00:21:26.780 QEMU NVMe Ctrl (12341 ): 40749 I/Os completed (+3604) 00:21:26.780 00:21:26.780 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:26.780 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:26.780 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:26.780 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:26.780 [2024-11-27 04:43:33.640390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:26.780 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:26.780 [2024-11-27 04:43:33.641367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.780 [2024-11-27 04:43:33.641409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.780 [2024-11-27 04:43:33.641431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.780 [2024-11-27 04:43:33.641446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.780 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:26.780 [2024-11-27 04:43:33.643017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.780 [2024-11-27 04:43:33.643058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.643097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.643110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:26.781 [2024-11-27 04:43:33.661775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:21:26.781 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:26.781 [2024-11-27 04:43:33.662642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.662671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.662688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.662701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:26.781 [2024-11-27 04:43:33.664040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.664082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.664095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 [2024-11-27 04:43:33.664109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:26.781 Attaching to 0000:00:10.0 00:21:26.781 Attached to 0000:00:10.0 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:26.781 04:43:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:26.781 Attaching to 0000:00:11.0 00:21:26.781 Attached to 0000:00:11.0 00:21:27.726 QEMU NVMe Ctrl (12340 ): 2514 I/Os completed (+2514) 00:21:27.726 QEMU NVMe Ctrl (12341 ): 2253 I/Os completed (+2253) 00:21:27.726 00:21:28.659 QEMU NVMe Ctrl (12340 ): 6007 I/Os completed (+3493) 00:21:28.659 QEMU NVMe Ctrl (12341 ): 5726 I/Os completed (+3473) 00:21:28.659 00:21:29.594 QEMU NVMe Ctrl (12340 ): 9666 I/Os completed (+3659) 00:21:29.594 QEMU NVMe Ctrl (12341 ): 9370 I/Os completed (+3644) 00:21:29.594 00:21:30.528 QEMU NVMe Ctrl (12340 ): 13256 I/Os completed (+3590) 00:21:30.528 QEMU NVMe Ctrl (12341 ): 12971 I/Os completed (+3601) 00:21:30.528 00:21:31.461 QEMU NVMe Ctrl (12340 ): 16861 I/Os completed (+3605) 00:21:31.461 QEMU NVMe Ctrl (12341 ): 16576 I/Os completed (+3605) 00:21:31.461 00:21:32.835 QEMU NVMe Ctrl (12340 ): 20427 I/Os completed (+3566) 00:21:32.835 QEMU NVMe Ctrl (12341 ): 20158 I/Os completed (+3582) 00:21:32.835 00:21:33.767 QEMU NVMe Ctrl (12340 ): 23936 I/Os completed (+3509) 00:21:33.767 QEMU NVMe Ctrl (12341 ): 23668 I/Os completed (+3510) 00:21:33.767 00:21:34.700 QEMU NVMe Ctrl (12340 ): 27526 I/Os completed (+3590) 00:21:34.700 QEMU NVMe Ctrl (12341 ): 27293 I/Os completed (+3625) 00:21:34.700 
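Each reattach in this phase is the four-echo sequence at sw_hotplug.sh lines 59-62, seen above after every removal. Again the redirect targets are hidden by xtrace, so the sketch below is an assumption based on the stock driver_override rebind flow, not something the log states:

# Assumed shape of the rebind traced at lines 59-62 (every sysfs path
# here is inferred; the log only shows the echoed values):
dev=0000:00:10.0
echo uio_pci_generic > /sys/bus/pci/devices/$dev/driver_override   # line 59: pin the driver
echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind              # line 60 (assumed): drop current driver
echo "$dev" > /sys/bus/pci/drivers_probe                           # line 61 (assumed): re-probe -> uio_pci_generic
echo '' > /sys/bus/pci/devices/$dev/driver_override                # line 62: clear the override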
00:21:35.633 QEMU NVMe Ctrl (12340 ): 31149 I/Os completed (+3623) 00:21:35.633 QEMU NVMe Ctrl (12341 ): 30926 I/Os completed (+3633) 00:21:35.633 00:21:36.568 QEMU NVMe Ctrl (12340 ): 34382 I/Os completed (+3233) 00:21:36.568 QEMU NVMe Ctrl (12341 ): 34235 I/Os completed (+3309) 00:21:36.568 00:21:37.502 QEMU NVMe Ctrl (12340 ): 37465 I/Os completed (+3083) 00:21:37.502 QEMU NVMe Ctrl (12341 ): 37314 I/Os completed (+3079) 00:21:37.502 00:21:38.435 QEMU NVMe Ctrl (12340 ): 40542 I/Os completed (+3077) 00:21:38.435 QEMU NVMe Ctrl (12341 ): 40344 I/Os completed (+3030) 00:21:38.435 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:39.001 [2024-11-27 04:43:45.918085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:39.001 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:39.001 [2024-11-27 04:43:45.919256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.919309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.919326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.919343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:39.001 [2024-11-27 04:43:45.921322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.921363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.921376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.921390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:39.001 [2024-11-27 04:43:45.941350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
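This whole non-bdev phase runs inside timing_cmd, which is where the "42.76" printed at the end of it below comes from: bash's TIMEFORMAT=%2R makes the time keyword emit just the real time with two decimals. A stripped-down sketch of the mechanism (the real timing_cmd in autotest_common.sh also juggles file descriptors and exit codes):

# Stripped-down sketch of the timing wrapper: capture `time`'s %2R
# report while discarding the command's own output.
timing_cmd() {
    local TIMEFORMAT=%2R time
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
    echo "$time"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 false)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2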
00:21:39.001 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:39.001 [2024-11-27 04:43:45.942425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.942464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.942482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.942497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:39.001 [2024-11-27 04:43:45.944183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.944219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.944236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 [2024-11-27 04:43:45.944248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:39.001 04:43:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:39.001 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:21:39.001 EAL: Scan for (pci) bus failed. 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:39.001 Attaching to 0000:00:10.0 00:21:39.001 Attached to 0000:00:10.0 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:39.001 04:43:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:39.001 Attaching to 0000:00:11.0 00:21:39.001 Attached to 0000:00:11.0 00:21:39.001 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:39.001 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:39.001 [2024-11-27 04:43:46.180634] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:21:51.194 04:43:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:51.194 04:43:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:51.194 04:43:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.76 00:21:51.194 04:43:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.76 00:21:51.194 04:43:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:21:51.194 04:43:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.76 00:21:51.194 04:43:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.76 2 00:21:51.194 remove_attach_helper took 42.76s to complete (handling 2 nvme drive(s)) 04:43:58 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:21:57.751 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66776 00:21:57.752 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66776) - No such process 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66776 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67325 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67325 00:21:57.752 04:44:04 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67325 ']' 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.752 04:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:57.752 [2024-11-27 04:44:04.260470] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
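From here the test switches strategy: instead of the standalone hotplug example app, it starts a long-lived spdk_tgt and drives everything over JSON-RPC. The two RPCs doing the work in the traces below, spelled out as direct rpc.py invocations (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

# The RPCs exercised in this phase, issued by hand against the target:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e   # enable the hotplug monitor
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs             # dump attached bdevs as JSON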
00:21:57.752 [2024-11-27 04:44:04.260594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67325 ] 00:21:57.752 [2024-11-27 04:44:04.415374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.752 [2024-11-27 04:44:04.513972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.009 04:44:05 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:21:58.010 04:44:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:58.010 04:44:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:04.581 [2024-11-27 04:44:11.208351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:22:04.581 [2024-11-27 04:44:11.209773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.209811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.209826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.209844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.209852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.209861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.209869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.209877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.209883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.209895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.209902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.209910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.608345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
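With use_bdev=true, detach and reattach are now verified through the target rather than through sysfs: bdev_bdfs, traced above via /dev/fd/63, maps the bdev_get_bdevs JSON to the set of PCI addresses still backing NVMe bdevs. Reconstructed from the xtrace (the real helper feeds jq through process substitution, hence the /dev/fd/63):

# bdev_bdfs as traced at sw_hotplug.sh lines 12-13: list the PCI
# addresses of the target's NVMe-backed bdevs, de-duplicated.
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}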
00:22:04.581 [2024-11-27 04:44:11.609742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.609778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.609791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.609807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.609816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.609823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.609831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.609838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.609846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 [2024-11-27 04:44:11.609853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:04.581 [2024-11-27 04:44:11.609861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.581 [2024-11-27 04:44:11.609867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:04.581 04:44:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:04.581 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:04.837 04:44:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:17.067 04:44:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:17.067 04:44:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.067 04:44:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:17.067 04:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:17.067 04:44:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.067 04:44:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:17.067 04:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:17.067 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:17.067 [2024-11-27 04:44:24.108547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
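The "(( 2 > 0 )) ... sleep 0.5" and "Still waiting for %s to be gone" entries around each event are a poll-until-gone loop, and the heavily escaped [[ ... ]] comparison above is simply the expected BDF list checked once the devices should be back. A compact sketch of both halves (a reconstruction; the real script spreads this across lines 50-51 and 70-71):

# Poll every 0.5s until no removed controller is still reported, then,
# after the settle window, require the reported list to match exactly.
expected='0000:00:10.0 0000:00:11.0'
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done
sleep 12                      # settle window after the rescan
bdfs=($(bdev_bdfs))           # re-query: both controllers must be back
[[ ${bdfs[*]} == "$expected" ]]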
00:22:17.067 [2024-11-27 04:44:24.109907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.067 [2024-11-27 04:44:24.109945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.067 [2024-11-27 04:44:24.109957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.067 [2024-11-27 04:44:24.109974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.067 [2024-11-27 04:44:24.109982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.067 [2024-11-27 04:44:24.109990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.067 [2024-11-27 04:44:24.109997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.067 [2024-11-27 04:44:24.110005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.067 [2024-11-27 04:44:24.110012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.067 [2024-11-27 04:44:24.110020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.067 [2024-11-27 04:44:24.110027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.067 [2024-11-27 04:44:24.110034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.326 [2024-11-27 04:44:24.508555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:22:17.326 [2024-11-27 04:44:24.509901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.326 [2024-11-27 04:44:24.509933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.326 [2024-11-27 04:44:24.509946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.326 [2024-11-27 04:44:24.509962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.326 [2024-11-27 04:44:24.509970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.326 [2024-11-27 04:44:24.509977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.326 [2024-11-27 04:44:24.509985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.326 [2024-11-27 04:44:24.509991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.326 [2024-11-27 04:44:24.509999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.326 [2024-11-27 04:44:24.510006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:17.326 [2024-11-27 04:44:24.510013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.326 [2024-11-27 04:44:24.510020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:17.584 04:44:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.584 04:44:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:17.584 04:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:17.584 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:17.842 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:17.842 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:17.842 04:44:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:30.054 04:44:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:30.054 04:44:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:30.054 [2024-11-27 04:44:37.008767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:22:30.054 [2024-11-27 04:44:37.010173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.054 [2024-11-27 04:44:37.010211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.054 [2024-11-27 04:44:37.010223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.054 [2024-11-27 04:44:37.010241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.054 [2024-11-27 04:44:37.010249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.054 [2024-11-27 04:44:37.010259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.054 [2024-11-27 04:44:37.010266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.054 [2024-11-27 04:44:37.010275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.054 [2024-11-27 04:44:37.010282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.054 [2024-11-27 04:44:37.010290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.054 [2024-11-27 04:44:37.010297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.054 [2024-11-27 04:44:37.010304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.312 [2024-11-27 04:44:37.408785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:22:30.312 [2024-11-27 04:44:37.410439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.312 [2024-11-27 04:44:37.410479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.312 [2024-11-27 04:44:37.410494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.312 [2024-11-27 04:44:37.410514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.312 [2024-11-27 04:44:37.410525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.312 [2024-11-27 04:44:37.410534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.312 [2024-11-27 04:44:37.410545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.312 [2024-11-27 04:44:37.410553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.312 [2024-11-27 04:44:37.410564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.312 [2024-11-27 04:44:37.410573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:30.312 [2024-11-27 04:44:37.410582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.312 [2024-11-27 04:44:37.410590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.312 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:30.313 04:44:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.313 04:44:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:30.313 04:44:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:30.313 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:30.571 04:44:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:22:42.769 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:22:42.769 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.769 04:44:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:42.770 04:44:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:42.770 04:44:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:42.770 04:44:49 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:49.386 04:44:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.386 04:44:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:49.386 04:44:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:49.386 04:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:49.386 [2024-11-27 04:44:55.880391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:22:49.386 [2024-11-27 04:44:55.881496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.386 [2024-11-27 04:44:55.881533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.386 [2024-11-27 04:44:55.881544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.386 [2024-11-27 04:44:55.881563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.386 [2024-11-27 04:44:55.881571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.386 [2024-11-27 04:44:55.881580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.386 [2024-11-27 04:44:55.881588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.386 [2024-11-27 04:44:55.881596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.386 [2024-11-27 04:44:55.881603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.386 [2024-11-27 04:44:55.881612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.386 [2024-11-27 04:44:55.881618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.386 [2024-11-27 04:44:55.881630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.386 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:49.386 04:44:56 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:49.386 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:49.386 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:49.387 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:49.387 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:49.387 04:44:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.387 04:44:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:49.387 04:44:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.387 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:49.387 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:49.387 [2024-11-27 04:44:56.480393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:22:49.387 [2024-11-27 04:44:56.481481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.387 [2024-11-27 04:44:56.481513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.387 [2024-11-27 04:44:56.481525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.387 [2024-11-27 04:44:56.481541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.387 [2024-11-27 04:44:56.481551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.387 [2024-11-27 04:44:56.481558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.387 [2024-11-27 04:44:56.481567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.387 [2024-11-27 04:44:56.481574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.387 [2024-11-27 04:44:56.481582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.387 [2024-11-27 04:44:56.481589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:49.387 [2024-11-27 04:44:56.481597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.387 [2024-11-27 04:44:56.481604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:49.953 04:44:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.953 04:44:56 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.953 04:44:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:49.953 04:44:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:49.953 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:50.211 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:50.211 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:50.211 04:44:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:02.420 04:45:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.420 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:23:02.421 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:02.421 [2024-11-27 04:45:09.281024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
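The wait loop above is driven by the bdev_bdfs helper, whose pieces are fully visible in the trace (sw_hotplug.sh@12-13): it queries the target's bdevs over JSON-RPC and reduces them to the unique NVMe PCI addresses. A minimal reconstruction from the traced commands, with the poll loop shape inferred from the sleep/printf pattern at sw_hotplug.sh@50-51:

    # List NVMe bdevs via JSON-RPC and keep only the unique PCI addresses.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the removed controllers stop showing up in the bdev list.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done

The sort -u collapses multiple namespaces on the same controller to a single BDF, which is why the (( N > 0 )) checks in the trace count controllers rather than bdevs.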
00:23:02.421 [2024-11-27 04:45:09.282086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.421 [2024-11-27 04:45:09.282122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.421 [2024-11-27 04:45:09.282138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.421 [2024-11-27 04:45:09.282158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.421 [2024-11-27 04:45:09.282166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.421 [2024-11-27 04:45:09.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.421 [2024-11-27 04:45:09.282182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.421 [2024-11-27 04:45:09.282190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.421 [2024-11-27 04:45:09.282196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.421 [2024-11-27 04:45:09.282208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.421 [2024-11-27 04:45:09.282215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.421 [2024-11-27 04:45:09.282223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.683 [2024-11-27 04:45:09.681038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
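The abort storm above is the expected signature of a surprise removal: once the controller disappears, the driver marks it failed (nvme_ctrlr_fail) and aborts every queued admin command, so the outstanding ASYNC EVENT REQUESTs complete as ABORTED - BY REQUEST. The bare "echo 1" traced at sw_hotplug.sh@40 is consistent with the kernel's standard sysfs hot-remove hook; the script internals beyond the trace are not shown, so treat this as a hypothetical reconstruction:

    # Hypothetical remove side of the helper (sw_hotplug.sh@39-40): writing 1
    # to the per-device sysfs node logically removes the function from the
    # PCI bus, which is what triggers the qpair aborts logged above.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done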
00:23:02.683 [2024-11-27 04:45:09.682315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.683 [2024-11-27 04:45:09.682347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.683 [2024-11-27 04:45:09.682359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.683 [2024-11-27 04:45:09.682374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.683 [2024-11-27 04:45:09.682386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.683 [2024-11-27 04:45:09.682394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.683 [2024-11-27 04:45:09.682403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.683 [2024-11-27 04:45:09.682410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.683 [2024-11-27 04:45:09.682417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.683 [2024-11-27 04:45:09.682424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:02.683 [2024-11-27 04:45:09.682432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.683 [2024-11-27 04:45:09.682438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.683 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:02.684 04:45:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.684 04:45:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:02.684 04:45:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:02.684 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:02.942 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:02.942 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:02.942 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:02.942 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:02.942 04:45:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:23:02.942 04:45:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:02.942 04:45:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:02.942 04:45:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:15.137 04:45:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.137 04:45:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:15.137 04:45:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:15.137 [2024-11-27 04:45:22.081248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:23:15.137 [2024-11-27 04:45:22.082451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.137 [2024-11-27 04:45:22.082491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.137 [2024-11-27 04:45:22.082502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.137 [2024-11-27 04:45:22.082520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.137 [2024-11-27 04:45:22.082528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.137 [2024-11-27 04:45:22.082537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.137 [2024-11-27 04:45:22.082544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.137 [2024-11-27 04:45:22.082554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.137 [2024-11-27 04:45:22.082560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.137 [2024-11-27 04:45:22.082569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.137 [2024-11-27 04:45:22.082576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.137 [2024-11-27 04:45:22.082584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:15.137 04:45:22 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:15.137 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:15.138 04:45:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.138 04:45:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:15.138 04:45:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:15.138 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:15.453 [2024-11-27 04:45:22.481256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:23:15.453 [2024-11-27 04:45:22.482465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.453 [2024-11-27 04:45:22.482493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.453 [2024-11-27 04:45:22.482505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.453 [2024-11-27 04:45:22.482520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.453 [2024-11-27 04:45:22.482529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.453 [2024-11-27 04:45:22.482536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.453 [2024-11-27 04:45:22.482544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.453 [2024-11-27 04:45:22.482551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.453 [2024-11-27 04:45:22.482561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.453 [2024-11-27 04:45:22.482568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:15.453 [2024-11-27 04:45:22.482579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.453 [2024-11-27 04:45:22.482586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:15.453 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:23:15.453 04:45:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.453 04:45:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:15.711 04:45:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:15.711 04:45:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.14 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.14 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:23:27.901 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:27.901 04:45:34 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67325 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67325 ']' 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67325 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67325 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.901 04:45:34 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.901 killing process with pid 67325 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67325' 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67325 00:23:27.901 04:45:34 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67325 00:23:29.275 04:45:36 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.840 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.840 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.840 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:29.840 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:29.840 00:23:29.840 real 2m28.877s 00:23:29.840 user 1m51.023s 00:23:29.840 sys 0m16.518s 00:23:29.840 04:45:36 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.840 04:45:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:29.840 ************************************ 00:23:29.840 END TEST sw_hotplug 00:23:29.840 ************************************ 00:23:29.840 04:45:37 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:23:29.840 04:45:37 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:29.840 04:45:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.840 04:45:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.840 04:45:37 -- common/autotest_common.sh@10 -- # set +x 00:23:29.840 ************************************ 00:23:29.840 START TEST nvme_xnvme 00:23:29.841 ************************************ 00:23:29.841 04:45:37 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:30.101 * Looking for test storage... 
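The killprocess teardown traced at the end of sw_hotplug can be pieced back together from the checks it logs (autotest_common.sh@954-978): validate the pid, confirm the process still exists, refuse to signal a sudo wrapper directly, then kill and reap it. A sketch built only from those traced steps; the branch taken when the process actually is sudo is not exercised in this run and is an assumption:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # traced as '[' -z 67325 ']'
        kill -0 "$pid" || return 1           # process must still be alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1   # assumed; not hit here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap so the exit status is seen
    }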
00:23:30.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.101 04:45:37 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.101 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.101 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.101 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.101 04:45:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.102 04:45:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.102 --rc genhtml_branch_coverage=1 00:23:30.102 --rc genhtml_function_coverage=1 00:23:30.102 --rc genhtml_legend=1 00:23:30.102 --rc geninfo_all_blocks=1 00:23:30.102 --rc geninfo_unexecuted_blocks=1 00:23:30.102 00:23:30.102 ' 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.102 --rc genhtml_branch_coverage=1 00:23:30.102 --rc genhtml_function_coverage=1 00:23:30.102 --rc genhtml_legend=1 00:23:30.102 --rc geninfo_all_blocks=1 00:23:30.102 --rc geninfo_unexecuted_blocks=1 00:23:30.102 00:23:30.102 ' 00:23:30.102 04:45:37 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.102 --rc genhtml_branch_coverage=1 00:23:30.102 --rc genhtml_function_coverage=1 00:23:30.102 --rc genhtml_legend=1 00:23:30.102 --rc geninfo_all_blocks=1 00:23:30.102 --rc geninfo_unexecuted_blocks=1 00:23:30.102 00:23:30.102 ' 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.102 --rc genhtml_branch_coverage=1 00:23:30.102 --rc genhtml_function_coverage=1 00:23:30.102 --rc genhtml_legend=1 00:23:30.102 --rc geninfo_all_blocks=1 00:23:30.102 --rc geninfo_unexecuted_blocks=1 00:23:30.102 00:23:30.102 ' 00:23:30.102 04:45:37 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:23:30.102 04:45:37 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:30.102 04:45:37 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:23:30.102 04:45:37 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:23:30.103 04:45:37 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:23:30.103 04:45:37 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
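Before defining the app locations, applications.sh (traced just above) anchors everything to a repository root derived from its own path; the dirname/readlink -f pair is the whole trick. A minimal sketch of that idiom as it appears at applications.sh@8-12; the trace only shows the before and after values of _root, so the exact trimming expression is an assumption:

    # Resolve paths relative to this sourced file so they work from any CWD.
    _root=$(readlink -f "$(dirname "$BASH_SOURCE")")   # .../spdk/test/common
    _root=${_root%/test/common}                        # trace: .../spdk_repo/spdk
    _app_dir=$_root/build/bin
    _test_app_dir=$_root/test/app
    _examples_dir=$_root/build/examples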
00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:30.103 04:45:37 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:30.103 #define SPDK_CONFIG_H 00:23:30.103 #define SPDK_CONFIG_AIO_FSDEV 1 00:23:30.103 #define SPDK_CONFIG_APPS 1 00:23:30.103 #define SPDK_CONFIG_ARCH native 00:23:30.103 #define SPDK_CONFIG_ASAN 1 00:23:30.103 #undef SPDK_CONFIG_AVAHI 00:23:30.103 #undef SPDK_CONFIG_CET 00:23:30.103 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:23:30.103 #define SPDK_CONFIG_COVERAGE 1 00:23:30.103 #define SPDK_CONFIG_CROSS_PREFIX 00:23:30.103 #undef SPDK_CONFIG_CRYPTO 00:23:30.103 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:30.103 #undef SPDK_CONFIG_CUSTOMOCF 00:23:30.103 #undef SPDK_CONFIG_DAOS 00:23:30.103 #define SPDK_CONFIG_DAOS_DIR 00:23:30.103 #define SPDK_CONFIG_DEBUG 1 00:23:30.103 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:30.103 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:30.103 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:30.103 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:30.103 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:30.103 #undef SPDK_CONFIG_DPDK_UADK 00:23:30.103 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:30.103 #define SPDK_CONFIG_EXAMPLES 1 00:23:30.103 #undef SPDK_CONFIG_FC 00:23:30.103 #define SPDK_CONFIG_FC_PATH 00:23:30.103 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:30.103 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:30.103 #define SPDK_CONFIG_FSDEV 1 00:23:30.103 #undef SPDK_CONFIG_FUSE 00:23:30.103 #undef SPDK_CONFIG_FUZZER 00:23:30.103 #define SPDK_CONFIG_FUZZER_LIB 00:23:30.103 #undef SPDK_CONFIG_GOLANG 00:23:30.103 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:30.103 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:23:30.103 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:30.103 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:23:30.103 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:30.103 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:30.103 #undef SPDK_CONFIG_HAVE_LZ4 00:23:30.103 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:23:30.103 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:23:30.103 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:30.103 #define SPDK_CONFIG_IDXD 1 00:23:30.103 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:30.103 #undef SPDK_CONFIG_IPSEC_MB 00:23:30.103 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:30.103 #define SPDK_CONFIG_ISAL 1 00:23:30.103 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:30.103 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:30.103 #define SPDK_CONFIG_LIBDIR 00:23:30.103 #undef SPDK_CONFIG_LTO 00:23:30.103 #define SPDK_CONFIG_MAX_LCORES 128 00:23:30.103 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:23:30.103 #define SPDK_CONFIG_NVME_CUSE 1 00:23:30.103 #undef SPDK_CONFIG_OCF 00:23:30.103 #define SPDK_CONFIG_OCF_PATH 00:23:30.103 #define SPDK_CONFIG_OPENSSL_PATH 00:23:30.103 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:30.103 
#define SPDK_CONFIG_PGO_DIR 00:23:30.103 #undef SPDK_CONFIG_PGO_USE 00:23:30.103 #define SPDK_CONFIG_PREFIX /usr/local 00:23:30.103 #undef SPDK_CONFIG_RAID5F 00:23:30.103 #undef SPDK_CONFIG_RBD 00:23:30.103 #define SPDK_CONFIG_RDMA 1 00:23:30.103 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:30.103 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:30.103 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:30.103 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:30.103 #define SPDK_CONFIG_SHARED 1 00:23:30.103 #undef SPDK_CONFIG_SMA 00:23:30.103 #define SPDK_CONFIG_TESTS 1 00:23:30.103 #undef SPDK_CONFIG_TSAN 00:23:30.103 #define SPDK_CONFIG_UBLK 1 00:23:30.103 #define SPDK_CONFIG_UBSAN 1 00:23:30.103 #undef SPDK_CONFIG_UNIT_TESTS 00:23:30.103 #undef SPDK_CONFIG_URING 00:23:30.103 #define SPDK_CONFIG_URING_PATH 00:23:30.103 #undef SPDK_CONFIG_URING_ZNS 00:23:30.103 #undef SPDK_CONFIG_USDT 00:23:30.103 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:30.103 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:30.103 #undef SPDK_CONFIG_VFIO_USER 00:23:30.103 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:30.103 #define SPDK_CONFIG_VHOST 1 00:23:30.104 #define SPDK_CONFIG_VIRTIO 1 00:23:30.104 #undef SPDK_CONFIG_VTUNE 00:23:30.104 #define SPDK_CONFIG_VTUNE_DIR 00:23:30.104 #define SPDK_CONFIG_WERROR 1 00:23:30.104 #define SPDK_CONFIG_WPDK_DIR 00:23:30.104 #define SPDK_CONFIG_XNVME 1 00:23:30.104 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:30.104 04:45:37 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.104 04:45:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.104 04:45:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.104 04:45:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.104 04:45:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.104 04:45:37 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.104 04:45:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.104 04:45:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.104 04:45:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:30.104 04:45:37 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@68 -- # uname -s 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:23:30.104 04:45:37 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:23:30.104 04:45:37 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0
00:23:30.104 04:45:37 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@126 -- # :
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@140 -- # :
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@142 -- # : true
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@154 -- # :
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@169 -- # :
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
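The paired ': 0' / 'export SPDK_TEST_*' lines above are bash xtrace output from the flag setup in common/autotest_common.sh: each test switch appears to be seeded with a default through the ':' no-op builtin and then exported. A minimal sketch of that idiom, assuming the conventional form rather than quoting the file verbatim:

    #!/usr/bin/env bash
    # Assumed shape of the flag defaulting traced above. ':' expands its
    # arguments and does nothing, so 'set -x' prints '# : 0' (or '# : 1'
    # when autorun-spdk.conf already set the flag, as with SPDK_TEST_XNVME).
    : "${SPDK_TEST_RBD:=0}"
    export SPDK_TEST_RBD
    : "${SPDK_TEST_XNVME:=0}"
    export SPDK_TEST_XNVME
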
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:23:30.105 04:45:37 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@206 -- # cat
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV=
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt=
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind=
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind=
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
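Lines @204 through @244 above set up the sanitizer environment before any instrumented binary runs: the old LeakSanitizer suppression file is removed, a suppression for a known libfuse3 leak is written (the 'cat' at @206 plus 'echo leak:libfuse3.so' at @242), and LSAN_OPTIONS is pointed at the file. A sketch of equivalent steps; the option strings are copied from the trace, while the file-writing shape is an assumption:

    #!/usr/bin/env bash
    # Rebuild the LSAN suppression file from scratch (assumed equivalent
    # of the cat/echo pair traced above).
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    # Sanitizer knobs exactly as exported in the trace.
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
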
00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68673 ]] 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68673 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.MiEv9D 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.MiEv9D/tests/xnvme /tmp/spdk.MiEv9D 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976432640 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591617536 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:23:30.106 
04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976432640 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:23:30.106 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591617536 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:23:30.107 04:45:37 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=93417041920 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6285737984 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:23:30.107 * Looking for test storage... 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976432640 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:30.107 04:45:37 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.107 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 00:23:30.366 04:45:37 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 00:23:30.366 04:45:37 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.366 04:45:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.366 04:45:37 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.366 04:45:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.366 04:45:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.366 04:45:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:30.366 04:45:37 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:23:30.366 
04:45:37 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
00:23:30.366 04:45:37 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:23:30.367 04:45:37 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:23:30.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:30.628 Waiting for block devices as requested
00:23:30.628 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:23:30.888 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:23:30.888 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:23:30.888 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:23:36.148 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:23:36.148 04:45:43 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:23:36.148 04:45:43 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:23:36.148 04:45:43 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:23:36.406 04:45:43 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:23:36.406 No valid GPT data, bailing
00:23:36.406 04:45:43 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:23:36.406 04:45:43 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:23:36.406 04:45:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:23:36.406 04:45:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:36.406 04:45:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:36.406 04:45:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:23:36.406 ************************************
00:23:36.406 START TEST xnvme_rpc
00:23:36.406 ************************************
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69066
00:23:36.406 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69066
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69066 ']'
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:36.407 04:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:23:36.664 [2024-11-27 04:45:43.624089] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
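xnvme_rpc has started a bare spdk_tgt (pid 69066), creates one xnvme bdev over JSON-RPC, and then reads every creation parameter back to compare it against what was passed in. The rpc_xnvme lines that follow (xnvme/common.sh@65-66) do the read-back; a sketch of that helper as inferred from the trace, where rpc_cmd is SPDK's test wrapper around scripts/rpc.py and the function body is an assumption, not the verbatim source:

    # Pull one parameter of the bdev_xnvme_create call back out of the
    # target's bdev config, e.g. 'rpc_xnvme filename' -> /dev/nvme0n1.
    # Requires rpc_cmd (from autotest_common.sh) and jq.
    rpc_xnvme() {
        local attr=$1
        rpc_cmd framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$attr"
    }
    # The test then asserts, for example: [[ $(rpc_xnvme io_mechanism) == libaio ]]
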
00:23:36.664 [2024-11-27 04:45:43.624211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69066 ] 00:23:36.664 [2024-11-27 04:45:43.783284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.922 [2024-11-27 04:45:43.883053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 xnvme_bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69066 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69066 ']' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69066 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69066 00:23:37.492 killing process with pid 69066 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69066' 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69066 00:23:37.492 04:45:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69066 00:23:39.391 ************************************ 00:23:39.391 END TEST xnvme_rpc 00:23:39.391 ************************************ 00:23:39.391 00:23:39.391 real 0m2.640s 00:23:39.391 user 0m2.737s 00:23:39.391 sys 0m0.343s 00:23:39.391 04:45:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.391 04:45:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:39.391 04:45:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:23:39.391 04:45:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.391 04:45:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.391 04:45:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:39.391 ************************************ 00:23:39.391 START TEST xnvme_bdevperf 00:23:39.391 ************************************ 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:39.391 04:45:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:39.391 { 00:23:39.391 "subsystems": [ 00:23:39.391 { 00:23:39.391 "subsystem": "bdev", 00:23:39.391 "config": [ 00:23:39.391 { 00:23:39.391 "params": { 00:23:39.391 "io_mechanism": "libaio", 00:23:39.391 "conserve_cpu": false, 00:23:39.391 "filename": "/dev/nvme0n1", 00:23:39.391 "name": "xnvme_bdev" 00:23:39.391 }, 00:23:39.391 "method": "bdev_xnvme_create" 00:23:39.391 }, 00:23:39.391 { 00:23:39.391 "method": "bdev_wait_for_examine" 00:23:39.391 } 00:23:39.391 ] 00:23:39.391 } 00:23:39.391 ] 00:23:39.391 } 00:23:39.391 [2024-11-27 04:45:46.287116] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:23:39.391 [2024-11-27 04:45:46.287229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69134 ] 00:23:39.391 [2024-11-27 04:45:46.445185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.391 [2024-11-27 04:45:46.544465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.648 Running I/O for 5 seconds... 00:23:41.667 36769.00 IOPS, 143.63 MiB/s [2024-11-27T04:45:50.243Z] 36057.50 IOPS, 140.85 MiB/s [2024-11-27T04:45:50.811Z] 35069.67 IOPS, 136.99 MiB/s [2024-11-27T04:45:52.196Z] 35589.00 IOPS, 139.02 MiB/s [2024-11-27T04:45:52.196Z] 34209.20 IOPS, 133.63 MiB/s 00:23:44.993 Latency(us) 00:23:44.993 [2024-11-27T04:45:52.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.993 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:23:44.993 xnvme_bdev : 5.01 34159.08 133.43 0.00 0.00 1868.97 345.01 9779.99 00:23:44.993 [2024-11-27T04:45:52.196Z] =================================================================================================================== 00:23:44.993 [2024-11-27T04:45:52.196Z] Total : 34159.08 133.43 0.00 0.00 1868.97 345.01 9779.99 00:23:45.565 04:45:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:45.565 04:45:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:45.565 04:45:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:45.565 04:45:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:23:45.565 04:45:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:45.565 { 00:23:45.565 "subsystems": [ 00:23:45.565 { 00:23:45.565 "subsystem": "bdev", 00:23:45.565 "config": [ 00:23:45.565 { 00:23:45.565 "params": { 00:23:45.565 "io_mechanism": "libaio", 00:23:45.565 "conserve_cpu": false, 00:23:45.565 "filename": "/dev/nvme0n1", 00:23:45.565 "name": "xnvme_bdev" 00:23:45.565 }, 00:23:45.565 "method": "bdev_xnvme_create" 00:23:45.565 }, 00:23:45.565 { 00:23:45.565 "method": "bdev_wait_for_examine" 00:23:45.565 } 00:23:45.565 ] 00:23:45.565 } 00:23:45.565 ] 00:23:45.565 } 00:23:45.565 [2024-11-27 04:45:52.612926] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
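The JSON block above is what gen_conf (dd/common.sh) prints for this pass of the test matrix: a bdev subsystem holding one bdev_xnvme_create call plus bdev_wait_for_examine, which bdevperf consumes as --json /dev/fd/62 through process substitution. A sketch of the wiring; gen_conf's real implementation is not shown in the log, so the stand-in below is hypothetical:

    #!/usr/bin/env bash
    # Hypothetical stand-in for gen_conf: emit the bdev config bdevperf needs.
    gen_conf() {
        printf '%s\n' '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"io_mechanism":"libaio","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"},"method":"bdev_xnvme_create"},{"method":"bdev_wait_for_examine"}]}]}'
    }
    # <(gen_conf) expands to a /dev/fd/NN path, matching --json /dev/fd/62 above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
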
00:23:45.565 [2024-11-27 04:45:52.613057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69214 ] 00:23:45.826 [2024-11-27 04:45:52.775397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.826 [2024-11-27 04:45:52.878158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.087 Running I/O for 5 seconds... 00:23:47.965 34745.00 IOPS, 135.72 MiB/s [2024-11-27T04:45:56.552Z] 19933.50 IOPS, 77.87 MiB/s [2024-11-27T04:45:57.492Z] 14206.67 IOPS, 55.49 MiB/s [2024-11-27T04:45:58.489Z] 11372.25 IOPS, 44.42 MiB/s [2024-11-27T04:45:58.489Z] 9734.00 IOPS, 38.02 MiB/s 00:23:51.286 Latency(us) 00:23:51.286 [2024-11-27T04:45:58.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.286 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:23:51.286 xnvme_bdev : 5.02 9714.59 37.95 0.00 0.00 6572.18 52.38 80659.69 00:23:51.286 [2024-11-27T04:45:58.489Z] =================================================================================================================== 00:23:51.286 [2024-11-27T04:45:58.490Z] Total : 9714.59 37.95 0.00 0.00 6572.18 52.38 80659.69 00:23:51.860 00:23:51.860 real 0m12.668s 00:23:51.860 user 0m7.343s 00:23:51.860 sys 0m3.890s 00:23:51.860 04:45:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.860 04:45:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:51.860 ************************************ 00:23:51.860 END TEST xnvme_bdevperf 00:23:51.860 ************************************ 00:23:51.860 04:45:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:23:51.860 04:45:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:51.860 04:45:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.860 04:45:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:51.860 ************************************ 00:23:51.860 START TEST xnvme_fio_plugin 00:23:51.860 ************************************ 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:51.860 04:45:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:51.860 { 00:23:51.860 "subsystems": [ 00:23:51.860 { 00:23:51.860 "subsystem": "bdev", 00:23:51.860 "config": [ 00:23:51.860 { 00:23:51.860 "params": { 00:23:51.860 "io_mechanism": "libaio", 00:23:51.860 "conserve_cpu": false, 00:23:51.860 "filename": "/dev/nvme0n1", 00:23:51.860 "name": "xnvme_bdev" 00:23:51.860 }, 00:23:51.860 "method": "bdev_xnvme_create" 00:23:51.860 }, 00:23:51.860 { 00:23:51.860 "method": "bdev_wait_for_examine" 00:23:51.860 } 00:23:51.860 ] 00:23:51.860 } 00:23:51.860 ] 00:23:51.860 } 00:23:52.128 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:52.128 fio-3.35 00:23:52.128 Starting 1 thread 00:23:58.712 00:23:58.712 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69326: Wed Nov 27 04:46:04 2024 00:23:58.712 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(694MiB/5001msec) 00:23:58.712 slat (usec): min=4, max=1672, avg=21.42, stdev=72.74 00:23:58.712 clat (usec): min=42, max=8608, avg=1249.64, stdev=635.48 00:23:58.712 lat (usec): min=115, max=8633, avg=1271.06, stdev=634.96 00:23:58.712 clat percentiles (usec): 00:23:58.712 | 1.00th=[ 233], 5.00th=[ 392], 10.00th=[ 537], 20.00th=[ 725], 00:23:58.712 | 30.00th=[ 881], 40.00th=[ 1020], 50.00th=[ 1156], 60.00th=[ 1303], 00:23:58.712 | 70.00th=[ 1467], 80.00th=[ 1696], 90.00th=[ 2089], 95.00th=[ 2474], 00:23:58.712 | 99.00th=[ 3163], 99.50th=[ 3458], 99.90th=[ 4293], 99.95th=[ 5211], 00:23:58.712 | 99.99th=[ 6849] 00:23:58.712 bw ( KiB/s): min=130584, max=157264, 
per=100.00%, avg=142625.78, stdev=8979.78, samples=9 00:23:58.712 iops : min=32646, max=39316, avg=35656.44, stdev=2244.94, samples=9 00:23:58.712 lat (usec) : 50=0.01%, 100=0.01%, 250=1.33%, 500=7.39%, 750=12.70% 00:23:58.712 lat (usec) : 1000=17.05% 00:23:58.712 lat (msec) : 2=50.02%, 4=11.35%, 10=0.17% 00:23:58.712 cpu : usr=37.82%, sys=49.44%, ctx=35, majf=0, minf=764 00:23:58.713 IO depths : 1=0.2%, 2=0.6%, 4=2.3%, 8=7.4%, 16=23.3%, 32=64.0%, >=64=2.3% 00:23:58.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.713 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:23:58.713 issued rwts: total=177786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:58.713 00:23:58.713 Run status group 0 (all jobs): 00:23:58.713 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=694MiB (728MB), run=5001-5001msec 00:23:58.713 ----------------------------------------------------- 00:23:58.713 Suppressions used: 00:23:58.713 count bytes template 00:23:58.713 1 11 /usr/src/fio/parse.c 00:23:58.713 1 8 libtcmalloc_minimal.so 00:23:58.713 1 904 libcrypto.so 00:23:58.713 ----------------------------------------------------- 00:23:58.713 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.713 04:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:58.713 { 00:23:58.713 "subsystems": [ 00:23:58.713 { 00:23:58.713 "subsystem": "bdev", 00:23:58.713 "config": [ 00:23:58.713 { 00:23:58.713 "params": { 00:23:58.713 "io_mechanism": "libaio", 00:23:58.713 "conserve_cpu": false, 00:23:58.713 "filename": "/dev/nvme0n1", 00:23:58.713 "name": "xnvme_bdev" 00:23:58.713 }, 00:23:58.713 "method": "bdev_xnvme_create" 00:23:58.713 }, 00:23:58.713 { 00:23:58.713 "method": "bdev_wait_for_examine" 00:23:58.713 } 00:23:58.713 ] 00:23:58.713 } 00:23:58.713 ] 00:23:58.713 } 00:23:58.976 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:58.976 fio-3.35 00:23:58.976 Starting 1 thread 00:24:05.564 00:24:05.564 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69418: Wed Nov 27 04:46:11 2024 00:24:05.564 write: IOPS=25.6k, BW=99.9MiB/s (105MB/s)(501MiB/5012msec); 0 zone resets 00:24:05.564 slat (usec): min=4, max=1682, avg=22.56, stdev=69.21 00:24:05.564 clat (usec): min=10, max=81679, avg=1914.42, stdev=3518.39 00:24:05.564 lat (usec): min=70, max=81684, avg=1936.99, stdev=3515.88 00:24:05.564 clat percentiles (usec): 00:24:05.564 | 1.00th=[ 223], 5.00th=[ 404], 10.00th=[ 537], 20.00th=[ 750], 00:24:05.564 | 30.00th=[ 922], 40.00th=[ 1074], 50.00th=[ 1205], 60.00th=[ 1369], 00:24:05.564 | 70.00th=[ 1549], 80.00th=[ 1811], 90.00th=[ 2278], 95.00th=[ 3458], 00:24:05.564 | 99.00th=[16319], 99.50th=[17433], 99.90th=[28443], 99.95th=[55313], 00:24:05.564 | 99.99th=[80217] 00:24:05.564 bw ( KiB/s): min=37784, max=134826, per=100.00%, avg=102424.90, stdev=42560.76, samples=10 00:24:05.564 iops : min= 9446, max=33706, avg=25606.10, stdev=10640.09, samples=10 00:24:05.564 lat (usec) : 20=0.01%, 50=0.01%, 100=0.05%, 250=1.22%, 500=7.32% 00:24:05.564 lat (usec) : 750=11.42%, 1000=15.55% 00:24:05.564 lat (msec) : 2=50.05%, 4=9.58%, 10=0.22%, 20=4.43%, 50=0.07% 00:24:05.564 lat (msec) : 100=0.09% 00:24:05.564 cpu : usr=50.41%, sys=35.70%, ctx=52, majf=0, minf=765 00:24:05.564 IO depths : 1=0.2%, 2=0.8%, 4=3.0%, 8=8.6%, 16=21.6%, 32=61.6%, >=64=4.1% 00:24:05.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.564 complete : 0=0.0%, 4=97.4%, 8=0.5%, 16=0.4%, 32=0.2%, 64=1.4%, >=64=0.0% 00:24:05.564 issued rwts: total=0,128190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.564 00:24:05.564 Run status group 0 (all jobs): 00:24:05.564 WRITE: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=501MiB (525MB), run=5012-5012msec 00:24:05.825 ----------------------------------------------------- 00:24:05.825 Suppressions used: 00:24:05.825 count bytes template 00:24:05.825 1 11 /usr/src/fio/parse.c 00:24:05.825 1 8 libtcmalloc_minimal.so 00:24:05.825 1 904 libcrypto.so 00:24:05.825 
----------------------------------------------------- 00:24:05.825 00:24:05.825 00:24:05.825 real 0m13.884s 00:24:05.825 user 0m7.300s 00:24:05.825 sys 0m4.840s 00:24:05.825 ************************************ 00:24:05.825 END TEST xnvme_fio_plugin 00:24:05.825 ************************************ 00:24:05.825 04:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.825 04:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:05.825 04:46:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:24:05.825 04:46:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:24:05.825 04:46:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:24:05.825 04:46:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:24:05.825 04:46:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.825 04:46:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.825 04:46:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:05.825 ************************************ 00:24:05.825 START TEST xnvme_rpc 00:24:05.825 ************************************ 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:24:05.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69510 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69510 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69510 ']' 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.825 04:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.825 [2024-11-27 04:46:13.012226] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
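The xnvme_rpc test starting here boils down to a handful of RPC calls against spdk_tgt. A minimal sketch of the same flow, with paths and arguments taken from this run; the plain sleep is an assumption standing in for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target; by default it listens on /var/tmp/spdk.sock.
    "$SPDK/build/bin/spdk_tgt" & tgt=$!
    sleep 2   # assumption: crude stand-in for waitforlisten
    # Register an xnvme bdev on the scratch namespace; -c turns on conserve_cpu.
    "$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # Read back what was registered, as the rpc_xnvme checks below do.
    "$SPDK/scripts/rpc.py" framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    # Tear down.
    "$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
    kill "$tgt"
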
00:24:05.825 [2024-11-27 04:46:13.012563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69510 ] 00:24:06.085 [2024-11-27 04:46:13.172552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.347 [2024-11-27 04:46:13.313625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.919 xnvme_bdev 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.919 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69510 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69510 ']' 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69510 00:24:07.180 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69510 00:24:07.181 killing process with pid 69510 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69510' 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69510 00:24:07.181 04:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69510 00:24:09.097 ************************************ 00:24:09.097 END TEST xnvme_rpc 00:24:09.097 ************************************ 00:24:09.097 00:24:09.097 real 0m3.027s 00:24:09.097 user 0m3.010s 00:24:09.097 sys 0m0.510s 00:24:09.097 04:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.097 04:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 04:46:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:09.097 04:46:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:09.097 04:46:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.097 04:46:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 ************************************ 00:24:09.097 START TEST xnvme_bdevperf 00:24:09.097 ************************************ 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:09.097 04:46:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 { 00:24:09.097 "subsystems": [ 00:24:09.097 { 00:24:09.097 "subsystem": "bdev", 00:24:09.097 "config": [ 00:24:09.097 { 00:24:09.097 "params": { 00:24:09.097 "io_mechanism": "libaio", 00:24:09.097 "conserve_cpu": true, 00:24:09.097 "filename": "/dev/nvme0n1", 00:24:09.097 "name": "xnvme_bdev" 00:24:09.097 }, 00:24:09.097 "method": "bdev_xnvme_create" 00:24:09.097 }, 00:24:09.097 { 00:24:09.097 "method": "bdev_wait_for_examine" 00:24:09.097 } 00:24:09.097 ] 00:24:09.097 } 00:24:09.097 ] 00:24:09.097 } 00:24:09.097 [2024-11-27 04:46:16.101369] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:24:09.097 [2024-11-27 04:46:16.101538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:24:09.097 [2024-11-27 04:46:16.265395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.358 [2024-11-27 04:46:16.404054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.618 Running I/O for 5 seconds... 00:24:11.957 29475.00 IOPS, 115.14 MiB/s [2024-11-27T04:46:19.732Z] 29400.00 IOPS, 114.84 MiB/s [2024-11-27T04:46:21.118Z] 30144.33 IOPS, 117.75 MiB/s [2024-11-27T04:46:22.063Z] 30812.50 IOPS, 120.36 MiB/s [2024-11-27T04:46:22.063Z] 30993.00 IOPS, 121.07 MiB/s 00:24:14.860 Latency(us) 00:24:14.860 [2024-11-27T04:46:22.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.860 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:14.860 xnvme_bdev : 5.01 30973.49 120.99 0.00 0.00 2061.79 69.71 26617.70 00:24:14.860 [2024-11-27T04:46:22.063Z] =================================================================================================================== 00:24:14.860 [2024-11-27T04:46:22.063Z] Total : 30973.49 120.99 0.00 0.00 2061.79 69.71 26617.70 00:24:15.432 04:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:15.432 04:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:24:15.432 04:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:15.432 04:46:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:15.432 04:46:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 { 00:24:15.432 "subsystems": [ 00:24:15.432 { 00:24:15.432 "subsystem": "bdev", 00:24:15.432 "config": [ 00:24:15.432 { 00:24:15.432 "params": { 00:24:15.432 "io_mechanism": "libaio", 00:24:15.432 "conserve_cpu": true, 00:24:15.432 "filename": "/dev/nvme0n1", 00:24:15.432 "name": "xnvme_bdev" 00:24:15.432 }, 00:24:15.432 "method": "bdev_xnvme_create" 00:24:15.432 }, 00:24:15.432 { 00:24:15.432 "method": "bdev_wait_for_examine" 00:24:15.432 } 00:24:15.432 ] 00:24:15.432 } 00:24:15.432 ] 00:24:15.432 } 00:24:15.750 [2024-11-27 04:46:22.670547] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
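The bdevperf runs in this test receive their bdev config through an anonymous descriptor (--json /dev/fd/62) fed by gen_conf. An equivalent standalone invocation using process substitution, with every flag and the JSON parameters copied from the randread pass above:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev \
      --json <(printf '%s' '{
        "subsystems": [{
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_xnvme_create",
              "params": { "io_mechanism": "libaio", "conserve_cpu": true,
                          "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }]
      }')
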
00:24:15.750 [2024-11-27 04:46:22.671102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69665 ] 00:24:15.750 [2024-11-27 04:46:22.844257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.011 [2024-11-27 04:46:22.982004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.272 Running I/O for 5 seconds... 00:24:18.218 2813.00 IOPS, 10.99 MiB/s [2024-11-27T04:46:26.363Z] 2888.00 IOPS, 11.28 MiB/s [2024-11-27T04:46:27.750Z] 2769.00 IOPS, 10.82 MiB/s [2024-11-27T04:46:28.323Z] 4614.25 IOPS, 18.02 MiB/s [2024-11-27T04:46:28.323Z] 5446.80 IOPS, 21.28 MiB/s 00:24:21.120 Latency(us) 00:24:21.120 [2024-11-27T04:46:28.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.120 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:24:21.120 xnvme_bdev : 5.00 5467.56 21.36 0.00 0.00 11700.38 53.17 403298.46 00:24:21.120 [2024-11-27T04:46:28.323Z] =================================================================================================================== 00:24:21.120 [2024-11-27T04:46:28.323Z] Total : 5467.56 21.36 0.00 0.00 11700.38 53.17 403298.46 00:24:22.065 00:24:22.065 real 0m13.113s 00:24:22.065 user 0m8.108s 00:24:22.065 sys 0m3.784s 00:24:22.065 04:46:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.065 ************************************ 00:24:22.065 END TEST xnvme_bdevperf 00:24:22.065 ************************************ 00:24:22.065 04:46:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:22.065 04:46:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:24:22.065 04:46:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:22.065 04:46:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.065 04:46:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:22.065 ************************************ 00:24:22.065 START TEST xnvme_fio_plugin 00:24:22.065 ************************************ 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:22.065 04:46:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:22.065 { 00:24:22.065 "subsystems": [ 00:24:22.065 { 00:24:22.065 "subsystem": "bdev", 00:24:22.065 "config": [ 00:24:22.065 { 00:24:22.065 "params": { 00:24:22.065 "io_mechanism": "libaio", 00:24:22.065 "conserve_cpu": true, 00:24:22.065 "filename": "/dev/nvme0n1", 00:24:22.065 "name": "xnvme_bdev" 00:24:22.065 }, 00:24:22.065 "method": "bdev_xnvme_create" 00:24:22.065 }, 00:24:22.065 { 00:24:22.065 "method": "bdev_wait_for_examine" 00:24:22.065 } 00:24:22.065 ] 00:24:22.065 } 00:24:22.065 ] 00:24:22.065 } 00:24:22.328 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:22.328 fio-3.35 00:24:22.328 Starting 1 thread 00:24:28.920 00:24:28.920 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69784: Wed Nov 27 04:46:35 2024 00:24:28.920 read: IOPS=32.8k, BW=128MiB/s (134MB/s)(641MiB/5001msec) 00:24:28.920 slat (usec): min=4, max=2156, avg=22.13, stdev=99.28 00:24:28.920 clat (usec): min=104, max=172041, avg=1368.58, stdev=1498.95 00:24:28.920 lat (usec): min=184, max=172046, avg=1390.71, stdev=1495.75 00:24:28.920 clat percentiles (usec): 00:24:28.920 | 1.00th=[ 281], 5.00th=[ 515], 10.00th=[ 693], 20.00th=[ 922], 00:24:28.920 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1319], 60.00th=[ 1450], 00:24:28.920 | 70.00th=[ 1582], 80.00th=[ 1745], 90.00th=[ 1991], 95.00th=[ 2278], 00:24:28.920 | 99.00th=[ 2999], 99.50th=[ 3326], 99.90th=[ 4113], 99.95th=[ 4555], 00:24:28.920 | 99.99th=[77071] 00:24:28.920 bw ( KiB/s): min=113916, 
max=143880, per=99.43%, avg=130413.11, stdev=8717.04, samples=9 00:24:28.920 iops : min=28479, max=35970, avg=32603.22, stdev=2179.21, samples=9 00:24:28.920 lat (usec) : 250=0.68%, 500=4.06%, 750=7.43%, 1000=12.80% 00:24:28.920 lat (msec) : 2=65.15%, 4=9.77%, 10=0.09%, 20=0.01%, 50=0.01% 00:24:28.920 lat (msec) : 100=0.01%, 250=0.01% 00:24:28.920 cpu : usr=41.38%, sys=50.24%, ctx=15, majf=0, minf=764 00:24:28.920 IO depths : 1=0.3%, 2=1.0%, 4=3.1%, 8=8.6%, 16=23.5%, 32=61.4%, >=64=2.1% 00:24:28.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.920 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:24:28.920 issued rwts: total=163985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.920 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:28.920 00:24:28.920 Run status group 0 (all jobs): 00:24:28.920 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=641MiB (672MB), run=5001-5001msec 00:24:29.181 ----------------------------------------------------- 00:24:29.181 Suppressions used: 00:24:29.181 count bytes template 00:24:29.181 1 11 /usr/src/fio/parse.c 00:24:29.181 1 8 libtcmalloc_minimal.so 00:24:29.181 1 904 libcrypto.so 00:24:29.181 ----------------------------------------------------- 00:24:29.181 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:29.181 04:46:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:29.181 { 00:24:29.181 "subsystems": [ 00:24:29.181 { 00:24:29.181 "subsystem": "bdev", 00:24:29.181 "config": [ 00:24:29.181 { 00:24:29.181 "params": { 00:24:29.181 "io_mechanism": "libaio", 00:24:29.181 "conserve_cpu": true, 00:24:29.181 "filename": "/dev/nvme0n1", 00:24:29.181 "name": "xnvme_bdev" 00:24:29.181 }, 00:24:29.181 "method": "bdev_xnvme_create" 00:24:29.181 }, 00:24:29.181 { 00:24:29.181 "method": "bdev_wait_for_examine" 00:24:29.181 } 00:24:29.181 ] 00:24:29.181 } 00:24:29.181 ] 00:24:29.181 } 00:24:29.443 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:29.443 fio-3.35 00:24:29.443 Starting 1 thread 00:24:36.033 00:24:36.033 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69876: Wed Nov 27 04:46:42 2024 00:24:36.033 write: IOPS=29.1k, BW=114MiB/s (119MB/s)(569MiB/5006msec); 0 zone resets 00:24:36.033 slat (usec): min=4, max=1896, avg=20.99, stdev=80.30 00:24:36.033 clat (usec): min=9, max=16389, avg=1650.07, stdev=2037.66 00:24:36.033 lat (usec): min=63, max=16394, avg=1671.05, stdev=2034.20 00:24:36.033 clat percentiles (usec): 00:24:36.033 | 1.00th=[ 217], 5.00th=[ 388], 10.00th=[ 537], 20.00th=[ 725], 00:24:36.033 | 30.00th=[ 881], 40.00th=[ 1037], 50.00th=[ 1172], 60.00th=[ 1336], 00:24:36.033 | 70.00th=[ 1516], 80.00th=[ 1762], 90.00th=[ 2278], 95.00th=[ 5866], 00:24:36.033 | 99.00th=[11600], 99.50th=[12518], 99.90th=[14091], 99.95th=[14615], 00:24:36.033 | 99.99th=[15401] 00:24:36.033 bw ( KiB/s): min=52295, max=147688, per=100.00%, avg=122763.11, stdev=31037.69, samples=9 00:24:36.033 iops : min=13073, max=36922, avg=30690.67, stdev=7759.64, samples=9 00:24:36.033 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.10%, 250=1.42% 00:24:36.033 lat (usec) : 500=7.12%, 750=12.96%, 1000=16.14% 00:24:36.033 lat (msec) : 2=48.60%, 4=8.37%, 10=2.98%, 20=2.30% 00:24:36.033 cpu : usr=49.87%, sys=40.74%, ctx=34, majf=0, minf=765 00:24:36.033 IO depths : 1=0.3%, 2=1.0%, 4=2.9%, 8=8.4%, 16=21.4%, 32=62.6%, >=64=3.4% 00:24:36.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.033 complete : 0=0.0%, 4=97.5%, 8=0.4%, 16=0.4%, 32=0.3%, 64=1.4%, >=64=0.0% 00:24:36.033 issued rwts: total=0,145609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.033 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:36.033 00:24:36.033 Run status group 0 (all jobs): 00:24:36.033 WRITE: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=569MiB (596MB), run=5006-5006msec 00:24:36.033 ----------------------------------------------------- 00:24:36.033 Suppressions used: 00:24:36.033 count bytes template 00:24:36.033 1 11 /usr/src/fio/parse.c 00:24:36.033 1 8 libtcmalloc_minimal.so 00:24:36.033 1 904 libcrypto.so 00:24:36.033 
----------------------------------------------------- 00:24:36.033 00:24:36.033 00:24:36.033 real 0m13.875s 00:24:36.033 user 0m7.385s 00:24:36.033 sys 0m5.214s 00:24:36.033 04:46:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.033 ************************************ 00:24:36.033 END TEST xnvme_fio_plugin 00:24:36.033 ************************************ 00:24:36.033 04:46:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:24:36.033 04:46:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:24:36.033 04:46:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:36.033 04:46:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.033 04:46:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:36.033 ************************************ 00:24:36.033 START TEST xnvme_rpc 00:24:36.033 ************************************ 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:24:36.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69962 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69962 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69962 ']' 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:36.033 04:46:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:36.294 [2024-11-27 04:46:43.233477] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
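For orientation: the xnvme/xnvme.sh@75–@88 trace lines that keep reappearing are one nested loop, crossing every io_mechanism with both conserve_cpu settings and running the same three tests per combination. A condensed sketch reconstructed from the trace (the declare -A and exact array literals are assumptions; only libaio and io_uring with false/true are visible in this log):

    declare -A method_bdev_xnvme_create_0
    xnvme_io=(libaio io_uring)
    xnvme_conserve_cpu=(false true)
    for io in "${xnvme_io[@]}"; do
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
      for cc in "${xnvme_conserve_cpu[@]}"; do
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc            # run_test: harness helper that times and traces each test
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
      done
    done
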
00:24:36.294 [2024-11-27 04:46:43.233604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69962 ] 00:24:36.294 [2024-11-27 04:46:43.395806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.556 [2024-11-27 04:46:43.496886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 xnvme_bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69962 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69962 ']' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69962 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69962 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69962' 00:24:37.186 killing process with pid 69962 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69962 00:24:37.186 04:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69962 00:24:39.102 00:24:39.102 real 0m2.621s 00:24:39.102 user 0m2.747s 00:24:39.102 sys 0m0.358s 00:24:39.102 ************************************ 00:24:39.102 END TEST xnvme_rpc 00:24:39.102 ************************************ 00:24:39.102 04:46:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.102 04:46:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.102 04:46:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:39.102 04:46:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:39.102 04:46:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.102 04:46:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.102 ************************************ 00:24:39.102 START TEST xnvme_bdevperf 00:24:39.102 ************************************ 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:39.102 04:46:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:39.102 { 00:24:39.102 "subsystems": [ 00:24:39.102 { 00:24:39.102 "subsystem": "bdev", 00:24:39.102 "config": [ 00:24:39.102 { 00:24:39.102 "params": { 00:24:39.102 "io_mechanism": "io_uring", 00:24:39.102 "conserve_cpu": false, 00:24:39.102 "filename": "/dev/nvme0n1", 00:24:39.102 "name": "xnvme_bdev" 00:24:39.102 }, 00:24:39.102 "method": "bdev_xnvme_create" 00:24:39.102 }, 00:24:39.102 { 00:24:39.102 "method": "bdev_wait_for_examine" 00:24:39.102 } 00:24:39.102 ] 00:24:39.102 } 00:24:39.102 ] 00:24:39.102 } 00:24:39.102 [2024-11-27 04:46:45.908787] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:24:39.102 [2024-11-27 04:46:45.909062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70025 ] 00:24:39.102 [2024-11-27 04:46:46.068722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.102 [2024-11-27 04:46:46.170315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.363 Running I/O for 5 seconds... 00:24:41.256 36726.00 IOPS, 143.46 MiB/s [2024-11-27T04:46:49.849Z] 36002.00 IOPS, 140.63 MiB/s [2024-11-27T04:46:50.793Z] 35800.33 IOPS, 139.85 MiB/s [2024-11-27T04:46:51.733Z] 36241.00 IOPS, 141.57 MiB/s [2024-11-27T04:46:51.733Z] 36361.60 IOPS, 142.04 MiB/s 00:24:44.530 Latency(us) 00:24:44.530 [2024-11-27T04:46:51.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.531 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:44.531 xnvme_bdev : 5.00 36329.34 141.91 0.00 0.00 1756.44 89.40 31457.28 00:24:44.531 [2024-11-27T04:46:51.734Z] =================================================================================================================== 00:24:44.531 [2024-11-27T04:46:51.734Z] Total : 36329.34 141.91 0.00 0.00 1756.44 89.40 31457.28 00:24:45.099 04:46:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:45.099 04:46:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:24:45.099 04:46:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:45.099 04:46:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:45.099 04:46:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.099 { 00:24:45.099 "subsystems": [ 00:24:45.099 { 00:24:45.099 "subsystem": "bdev", 00:24:45.099 "config": [ 00:24:45.099 { 00:24:45.099 "params": { 00:24:45.099 "io_mechanism": "io_uring", 00:24:45.099 "conserve_cpu": false, 00:24:45.099 "filename": "/dev/nvme0n1", 00:24:45.099 "name": "xnvme_bdev" 00:24:45.099 }, 00:24:45.099 "method": "bdev_xnvme_create" 00:24:45.099 }, 00:24:45.099 { 00:24:45.099 "method": "bdev_wait_for_examine" 00:24:45.099 } 00:24:45.099 ] 00:24:45.099 } 00:24:45.099 ] 00:24:45.099 } 00:24:45.099 [2024-11-27 04:46:52.225217] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
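The rpc_xnvme assertions in the xnvme_rpc test above all follow one pattern: dump the bdev framework config and pull a single parameter of the registered bdev out with jq. A sketch of that helper as reconstructed from the xnvme/common.sh@65–@66 trace lines, with the harness's socket-aware rpc_cmd approximated by a direct rpc.py call:

    rpc_xnvme() {
      local prop=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$prop"
    }
    rpc_xnvme name           # xnvme_bdev
    rpc_xnvme filename       # /dev/nvme0n1
    rpc_xnvme io_mechanism   # io_uring on this pass
    rpc_xnvme conserve_cpu   # false here; true for the -c variants
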
00:24:45.099 [2024-11-27 04:46:52.225494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70106 ] 00:24:45.360 [2024-11-27 04:46:52.390804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.360 [2024-11-27 04:46:52.494711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.623 Running I/O for 5 seconds... 00:24:47.954 5142.00 IOPS, 20.09 MiB/s [2024-11-27T04:46:56.101Z] 5023.00 IOPS, 19.62 MiB/s [2024-11-27T04:46:57.081Z] 5168.67 IOPS, 20.19 MiB/s [2024-11-27T04:46:58.022Z] 5255.00 IOPS, 20.53 MiB/s [2024-11-27T04:46:58.022Z] 5161.00 IOPS, 20.16 MiB/s 00:24:50.819 Latency(us) 00:24:50.819 [2024-11-27T04:46:58.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.819 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:24:50.819 xnvme_bdev : 5.02 5157.06 20.14 0.00 0.00 12387.10 55.93 38313.35 00:24:50.819 [2024-11-27T04:46:58.022Z] =================================================================================================================== 00:24:50.819 [2024-11-27T04:46:58.022Z] Total : 5157.06 20.14 0.00 0.00 12387.10 55.93 38313.35 00:24:51.388 00:24:51.388 real 0m12.663s 00:24:51.388 user 0m5.975s 00:24:51.388 sys 0m6.440s 00:24:51.388 04:46:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.388 04:46:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.388 ************************************ 00:24:51.388 END TEST xnvme_bdevperf 00:24:51.388 ************************************ 00:24:51.389 04:46:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:24:51.389 04:46:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:51.389 04:46:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.389 04:46:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:51.389 ************************************ 00:24:51.389 START TEST xnvme_fio_plugin 00:24:51.389 ************************************ 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:51.389 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:51.650 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:51.650 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:51.650 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:51.650 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:51.650 04:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:51.650 { 00:24:51.650 "subsystems": [ 00:24:51.650 { 00:24:51.650 "subsystem": "bdev", 00:24:51.650 "config": [ 00:24:51.650 { 00:24:51.650 "params": { 00:24:51.650 "io_mechanism": "io_uring", 00:24:51.650 "conserve_cpu": false, 00:24:51.650 "filename": "/dev/nvme0n1", 00:24:51.650 "name": "xnvme_bdev" 00:24:51.650 }, 00:24:51.650 "method": "bdev_xnvme_create" 00:24:51.650 }, 00:24:51.650 { 00:24:51.650 "method": "bdev_wait_for_examine" 00:24:51.650 } 00:24:51.650 ] 00:24:51.650 } 00:24:51.650 ] 00:24:51.650 } 00:24:51.650 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:51.650 fio-3.35 00:24:51.650 Starting 1 thread 00:24:58.258 00:24:58.259 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70220: Wed Nov 27 04:47:04 2024 00:24:58.259 read: IOPS=37.5k, BW=146MiB/s (153MB/s)(732MiB/5002msec) 00:24:58.259 slat (usec): min=2, max=148, avg= 3.95, stdev= 2.35 00:24:58.259 clat (usec): min=258, max=8870, avg=1550.24, stdev=323.94 00:24:58.259 lat (usec): min=261, max=8873, avg=1554.19, stdev=324.33 00:24:58.259 clat percentiles (usec): 00:24:58.259 | 1.00th=[ 914], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1270], 00:24:58.259 | 30.00th=[ 1385], 40.00th=[ 1467], 50.00th=[ 1549], 60.00th=[ 1614], 00:24:58.259 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1942], 95.00th=[ 2057], 00:24:58.259 | 99.00th=[ 2376], 99.50th=[ 2638], 99.90th=[ 3359], 99.95th=[ 3785], 00:24:58.259 | 99.99th=[ 5014] 00:24:58.259 bw ( KiB/s): min=145904, max=156160, 
per=100.00%, avg=150696.89, stdev=3465.87, samples=9
00:24:58.259 iops : min=36476, max=39040, avg=37674.22, stdev=866.47, samples=9
00:24:58.259 lat (usec) : 500=0.01%, 750=0.06%, 1000=2.76%
00:24:58.259 lat (msec) : 2=90.21%, 4=6.93%, 10=0.03%
00:24:58.259 cpu : usr=33.25%, sys=65.45%, ctx=16, majf=0, minf=762
00:24:58.259 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.9%, 32=50.7%, >=64=1.6%
00:24:58.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:58.259 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:24:58.259 issued rwts: total=187346,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:58.259 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:58.259
00:24:58.259 Run status group 0 (all jobs):
00:24:58.259 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=732MiB (767MB), run=5002-5002msec
00:24:58.259 -----------------------------------------------------
00:24:58.259 Suppressions used:
00:24:58.259 count bytes template
00:24:58.259 1 11 /usr/src/fio/parse.c
00:24:58.259 1 8 libtcmalloc_minimal.so
00:24:58.259 1 904 libcrypto.so
00:24:58.259 -----------------------------------------------------
00:24:58.259
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:24:58.259 04:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:24:58.259 {
00:24:58.259 "subsystems": [
00:24:58.259 {
00:24:58.259 "subsystem": "bdev",
00:24:58.259 "config": [
00:24:58.259 {
00:24:58.259 "params": {
00:24:58.259 "io_mechanism": "io_uring",
00:24:58.259 "conserve_cpu": false,
00:24:58.259 "filename": "/dev/nvme0n1",
00:24:58.259 "name": "xnvme_bdev"
00:24:58.259 },
00:24:58.259 "method": "bdev_xnvme_create"
00:24:58.259 },
00:24:58.259 {
00:24:58.259 "method": "bdev_wait_for_examine"
00:24:58.259 }
00:24:58.259 ]
00:24:58.259 }
00:24:58.259 ]
00:24:58.259 }
00:24:58.548 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:24:58.548 fio-3.35
00:24:58.548 Starting 1 thread
00:25:05.113
00:25:05.113 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70311: Wed Nov 27 04:47:11 2024
00:25:05.113 write: IOPS=26.7k, BW=104MiB/s (109MB/s)(522MiB/5001msec); 0 zone resets
00:25:05.113 slat (nsec): min=2890, max=62365, avg=4052.13, stdev=2343.62
00:25:05.113 clat (usec): min=73, max=486290, avg=2241.31, stdev=14289.15
00:25:05.113 lat (usec): min=76, max=486293, avg=2245.37, stdev=14289.13
00:25:05.113 clat percentiles (usec):
00:25:05.113 | 1.00th=[ 807], 5.00th=[ 1057], 10.00th=[ 1172], 20.00th=[ 1319],
00:25:05.113 | 30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1663],
00:25:05.113 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2040], 95.00th=[ 2245],
00:25:05.113 | 99.00th=[ 5932], 99.50th=[ 8225], 99.90th=[177210], 99.95th=[379585],
00:25:05.113 | 99.99th=[484443]
00:25:05.113 bw ( KiB/s): min= 9768, max=147968, per=97.98%, avg=104688.89, stdev=57149.84, samples=9
00:25:05.113 iops : min= 2442, max=36992, avg=26172.22, stdev=14287.46, samples=9
00:25:05.113 lat (usec) : 100=0.01%, 250=0.03%, 500=0.32%, 750=0.43%, 1000=2.66%
00:25:05.113 lat (msec) : 2=84.64%, 4=10.16%, 10=1.43%, 20=0.14%, 50=0.01%
00:25:05.113 lat (msec) : 250=0.10%, 500=0.10%
00:25:05.113 cpu : usr=31.22%, sys=67.84%, ctx=9, majf=0, minf=763
00:25:05.113 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.5%, 16=24.1%, 32=52.7%, >=64=1.9%
00:25:05.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:05.113 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:25:05.113 issued rwts: total=0,133586,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:05.113 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:05.113
00:25:05.113 Run status group 0 (all jobs):
00:25:05.113 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=522MiB (547MB), run=5001-5001msec
00:25:05.113 -----------------------------------------------------
00:25:05.113 Suppressions used:
00:25:05.113 count bytes template
00:25:05.113 1 11 /usr/src/fio/parse.c
00:25:05.113 1 8 libtcmalloc_minimal.so
00:25:05.113 1 904 libcrypto.so
00:25:05.113 -----------------------------------------------------
00:25:05.113
00:25:05.113
00:25:05.113 real 0m13.494s
00:25:05.113 user 0m5.902s
00:25:05.113 sys 0m7.153s
00:25:05.113 04:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:05.113 ************************************
00:25:05.113 END TEST xnvme_fio_plugin
00:25:05.113 04:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:25:05.113 ************************************
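Note: each xnvme_fio_plugin pass above follows the same wiring: gen_conf prints the bdev JSON config, fio receives it as --spdk_json_conf=/dev/fd/62, and on sanitizer builds the libasan runtime found via ldd must be preloaded ahead of the spdk_bdev ioengine plugin. A minimal standalone sketch of that wiring, with paths taken from this run (the inline JSON stands in for gen_conf's traced output; the fd-62 redirection is an assumption about how the harness feeds it, not the harness itself):

  # locate the ASan runtime linked into the fio plugin, as the harness's ldd | grep | awk does
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload the sanitizer first, then the plugin, and hand the bdev config to fio on fd 62
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev \
      62< <(echo '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"io_mechanism":"io_uring","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"},"method":"bdev_xnvme_create"},{"method":"bdev_wait_for_examine"}]}]}')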
00:25:05.113 04:47:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:25:05.113 04:47:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:25:05.113 04:47:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:25:05.113 04:47:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:25:05.113 04:47:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:05.113 04:47:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:05.113 04:47:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:25:05.114 ************************************
00:25:05.114 START TEST xnvme_rpc
00:25:05.114 ************************************
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70392
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70392
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70392 ']'
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:05.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:05.114 04:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:05.114 [2024-11-27 04:47:12.214249] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:05.114 [2024-11-27 04:47:12.214522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70392 ]
00:25:05.373 [2024-11-27 04:47:12.375164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.373 [2024-11-27 04:47:12.478575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:05.941 xnvme_bdev
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.941 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70392
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70392 ']'
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70392
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70392
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:06.201 killing process with pid 70392
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70392'
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70392
00:25:06.201 04:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70392
00:25:07.641
00:25:07.641 real 0m2.669s
00:25:07.641 user 0m2.776s
00:25:07.641 sys 0m0.367s
00:25:07.641 ************************************
00:25:07.641 END TEST xnvme_rpc
00:25:07.641 ************************************
00:25:07.641 04:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:07.641 04:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
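Note: the xnvme_rpc test that just passed is a create/inspect/delete round-trip against the running spdk_tgt; rpc_cmd in the trace wraps SPDK's scripts/rpc.py. Roughly the same round-trip done by hand, under that assumption (the jq filter is the one traced above):

  # create the xnvme bdev with conserve_cpu enabled (-c), as this pass does
  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # read the live bdev config back and pluck a single params field per query
  scripts/rpc.py framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  # tear the bdev down again
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev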
00:25:07.901 04:47:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:25:07.901 04:47:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:07.901 04:47:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:07.901 04:47:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:25:07.901 ************************************
00:25:07.901 START TEST xnvme_bdevperf
00:25:07.901 ************************************
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:07.901 04:47:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:07.901 {
00:25:07.901 "subsystems": [
00:25:07.901 {
00:25:07.901 "subsystem": "bdev",
00:25:07.901 "config": [
00:25:07.901 {
00:25:07.901 "params": {
00:25:07.901 "io_mechanism": "io_uring",
00:25:07.901 "conserve_cpu": true,
00:25:07.901 "filename": "/dev/nvme0n1",
00:25:07.901 "name": "xnvme_bdev"
00:25:07.901 },
00:25:07.901 "method": "bdev_xnvme_create"
00:25:07.901 },
00:25:07.901 {
00:25:07.901 "method": "bdev_wait_for_examine"
00:25:07.901 }
00:25:07.901 ]
00:25:07.901 }
00:25:07.901 ]
00:25:07.901 }
00:25:07.901 [2024-11-27 04:47:14.934912] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:07.901 [2024-11-27 04:47:14.935037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70466 ]
00:25:08.163 [2024-11-27 04:47:15.091047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:08.163 [2024-11-27 04:47:15.195006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:08.424 Running I/O for 5 seconds...
00:25:10.304 36780.00 IOPS, 143.67 MiB/s
[2024-11-27T04:47:18.889Z] 37048.00 IOPS, 144.72 MiB/s
[2024-11-27T04:47:19.460Z] 37019.00 IOPS, 144.61 MiB/s
[2024-11-27T04:47:20.844Z] 37202.75 IOPS, 145.32 MiB/s
00:25:13.641 Latency(us)
00:25:13.641 [2024-11-27T04:47:20.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.641 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:25:13.641 xnvme_bdev : 5.00 37902.22 148.06 0.00 0.00 1683.71 639.61 12098.95
00:25:13.641 [2024-11-27T04:47:20.844Z] ===================================================================================================================
00:25:13.641 [2024-11-27T04:47:20.844Z] Total : 37902.22 148.06 0.00 0.00 1683.71 639.61 12098.95
00:25:14.215 04:47:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:14.215 04:47:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:25:14.215 04:47:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:14.215 04:47:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:14.215 04:47:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.215 {
00:25:14.215 "subsystems": [
00:25:14.215 {
00:25:14.215 "subsystem": "bdev",
00:25:14.215 "config": [
00:25:14.215 {
00:25:14.215 "params": {
00:25:14.215 "io_mechanism": "io_uring",
00:25:14.215 "conserve_cpu": true,
00:25:14.215 "filename": "/dev/nvme0n1",
00:25:14.215 "name": "xnvme_bdev"
00:25:14.215 },
00:25:14.215 "method": "bdev_xnvme_create"
00:25:14.215 },
00:25:14.215 {
00:25:14.215 "method": "bdev_wait_for_examine"
00:25:14.215 }
00:25:14.215 ]
00:25:14.215 }
00:25:14.215 ]
00:25:14.215 }
00:25:14.215 [2024-11-27 04:47:21.262828] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:14.476 [2024-11-27 04:47:21.262985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70536 ]
00:25:14.476 [2024-11-27 04:47:21.435028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:14.476 [2024-11-27 04:47:21.536365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:14.737 Running I/O for 5 seconds...
00:25:16.622 39233.00 IOPS, 153.25 MiB/s
[2024-11-27T04:47:25.208Z] 36480.00 IOPS, 142.50 MiB/s
[2024-11-27T04:47:26.148Z] 35806.33 IOPS, 139.87 MiB/s
[2024-11-27T04:47:27.088Z] 36146.75 IOPS, 141.20 MiB/s
[2024-11-27T04:47:27.088Z] 35698.20 IOPS, 139.45 MiB/s
00:25:19.885 Latency(us)
00:25:19.885 [2024-11-27T04:47:27.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:19.885 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:25:19.885 xnvme_bdev : 5.00 35681.18 139.38 0.00 0.00 1787.49 425.35 14317.10
00:25:19.885 [2024-11-27T04:47:27.088Z] ===================================================================================================================
00:25:19.885 [2024-11-27T04:47:27.088Z] Total : 35681.18 139.38 0.00 0.00 1787.49 425.35 14317.10
00:25:20.566
00:25:20.566 real 0m12.647s
00:25:20.566 user 0m8.177s
00:25:20.566 sys 0m3.825s
00:25:20.566 04:47:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:20.566 ************************************
00:25:20.566 04:47:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:20.566 END TEST xnvme_bdevperf
00:25:20.566 ************************************
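Note: both bdevperf passes above share one invocation shape: the bdev JSON goes in on /dev/fd/62 via --json and only -w selects the workload. A sketch of the randread pass as a standalone command, with the binary path and config taken from this run (the fd-62 redirection again stands in for the harness's gen_conf plumbing):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev \
      62< <(echo '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"io_mechanism":"io_uring","conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"},"method":"bdev_xnvme_create"},{"method":"bdev_wait_for_examine"}]}]}')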
00:25:20.566 04:47:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:25:20.566 04:47:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:20.566 04:47:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:20.566 04:47:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:25:20.566 ************************************
00:25:20.566 START TEST xnvme_fio_plugin
00:25:20.566 ************************************
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:25:20.566 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:25:20.567 04:47:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:20.567 {
00:25:20.567 "subsystems": [
00:25:20.567 {
00:25:20.567 "subsystem": "bdev",
00:25:20.567 "config": [
00:25:20.567 {
00:25:20.567 "params": {
00:25:20.567 "io_mechanism": "io_uring",
00:25:20.567 "conserve_cpu": true,
00:25:20.567 "filename": "/dev/nvme0n1",
00:25:20.567 "name": "xnvme_bdev"
00:25:20.567 },
00:25:20.567 "method": "bdev_xnvme_create"
00:25:20.567 },
00:25:20.567 {
00:25:20.567 "method": "bdev_wait_for_examine"
00:25:20.567 }
00:25:20.567 ]
00:25:20.567 }
00:25:20.567 ]
00:25:20.567 }
00:25:20.826 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:25:20.826 fio-3.35
00:25:20.826 Starting 1 thread
00:25:27.410
00:25:27.410 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70655: Wed Nov 27 04:47:33 2024
00:25:27.410 read: IOPS=34.9k, BW=136MiB/s (143MB/s)(681MiB/5001msec)
00:25:27.410 slat (usec): min=2, max=224, avg= 4.31, stdev= 2.57
00:25:27.410 clat (usec): min=812, max=3724, avg=1660.52, stdev=343.10
00:25:27.410 lat (usec): min=815, max=3731, avg=1664.83, stdev=343.51
00:25:27.410 clat percentiles (usec):
00:25:27.410 | 1.00th=[ 1020], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1369],
00:25:27.410 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1631], 60.00th=[ 1713],
00:25:27.410 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2114], 95.00th=[ 2278],
00:25:27.410 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3163], 99.95th=[ 3261],
00:25:27.410 | 99.99th=[ 3654]
00:25:27.410 bw ( KiB/s): min=128000, max=153088, per=100.00%, avg=140800.00, stdev=9262.11, samples=9
00:25:27.410 iops : min=32000, max=38272, avg=35200.00, stdev=2315.53, samples=9
00:25:27.410 lat (usec) : 1000=0.71%
00:25:27.410 lat (msec) : 2=84.22%, 4=15.07%
00:25:27.410 cpu : usr=52.20%, sys=43.82%, ctx=12, majf=0, minf=762
00:25:27.410 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:25:27.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:27.410 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:25:27.410 issued rwts: total=174336,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:27.410 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:27.410
00:25:27.410 Run status group 0 (all jobs):
00:25:27.410 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=681MiB (714MB), run=5001-5001msec
00:25:27.410 -----------------------------------------------------
00:25:27.410 Suppressions used:
00:25:27.410 count bytes template
00:25:27.410 1 11 /usr/src/fio/parse.c
00:25:27.410 1 8 libtcmalloc_minimal.so
00:25:27.410 1 904 libcrypto.so
00:25:27.410 -----------------------------------------------------
00:25:27.410
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:25:27.410 04:47:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:25:27.410 {
00:25:27.410 "subsystems": [
00:25:27.410 {
00:25:27.410 "subsystem": "bdev",
00:25:27.410 "config": [
00:25:27.410 {
00:25:27.410 "params": {
00:25:27.410 "io_mechanism": "io_uring",
00:25:27.410 "conserve_cpu": true,
00:25:27.410 "filename": "/dev/nvme0n1",
00:25:27.410 "name": "xnvme_bdev"
00:25:27.410 },
00:25:27.410 "method": "bdev_xnvme_create"
00:25:27.410 },
00:25:27.410 {
00:25:27.410 "method": "bdev_wait_for_examine"
00:25:27.410 }
00:25:27.410 ]
00:25:27.410 }
00:25:27.410 ]
00:25:27.410 }
00:25:27.672 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:25:27.672 fio-3.35
00:25:27.672 Starting 1 thread
00:25:34.292
00:25:34.292 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70748: Wed Nov 27 04:47:40 2024
00:25:34.292 write: IOPS=31.9k, BW=125MiB/s (131MB/s)(624MiB/5005msec); 0 zone resets
00:25:34.292 slat (usec): min=2, max=529, avg= 4.76, stdev= 3.47
00:25:34.292 clat (usec): min=65, max=17020, avg=1828.09, stdev=1619.10
00:25:34.292 lat (usec): min=68, max=17023, avg=1832.85, stdev=1619.19
00:25:34.292 clat percentiles (usec):
00:25:34.292 | 1.00th=[ 611], 5.00th=[ 996], 10.00th=[ 1139], 20.00th=[ 1254],
00:25:34.292 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1516], 60.00th=[ 1614],
00:25:34.292 | 70.00th=[ 1713], 80.00th=[ 1844], 90.00th=[ 2089], 95.00th=[ 2507],
00:25:34.292 | 99.00th=[10683], 99.50th=[11994], 99.90th=[14484], 99.95th=[15795],
00:25:34.292 | 99.99th=[16450]
00:25:34.292 bw ( KiB/s): min=53760, max=156600, per=100.00%, avg=127796.80, stdev=38956.58, samples=10
00:25:34.292 iops : min=13440, max=39150, avg=31949.20, stdev=9739.15, samples=10
00:25:34.292 lat (usec) : 100=0.01%, 250=0.09%, 500=0.70%, 750=1.43%, 1000=2.82%
00:25:34.292 lat (msec) : 2=82.41%, 4=8.58%, 10=2.59%, 20=1.38%
00:25:34.292 cpu : usr=68.92%, sys=26.60%, ctx=12, majf=0, minf=763
00:25:34.292 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.4%, 16=22.9%, 32=53.3%, >=64=2.5%
00:25:34.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:34.292 complete : 0=0.0%, 4=98.0%, 8=0.2%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0%
00:25:34.292 issued rwts: total=0,159807,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:34.292 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:34.292
00:25:34.292 Run status group 0 (all jobs):
00:25:34.292 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=624MiB (655MB), run=5005-5005msec
00:25:34.292 -----------------------------------------------------
00:25:34.292 Suppressions used:
00:25:34.292 count bytes template
00:25:34.292 1 11 /usr/src/fio/parse.c
00:25:34.292 1 8 libtcmalloc_minimal.so
00:25:34.292 1 904 libcrypto.so
00:25:34.292 -----------------------------------------------------
00:25:34.292
00:25:34.292 ************************************
00:25:34.292 END TEST xnvme_fio_plugin
00:25:34.292 ************************************
00:25:34.292
00:25:34.292 real 0m13.799s
00:25:34.292 user 0m8.957s
00:25:34.292 sys 0m4.077s
00:25:34.292 04:47:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:34.292 04:47:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
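Note: the loop now switches io_mechanism to io_uring_cmd, which drives the NVMe generic character device (/dev/ng0n1) with uring passthrough commands rather than the block device (/dev/nvme0n1) used for the io_uring passes above. Only the arguments of the create call change; sketched with scripts/rpc.py, assuming as before that rpc_cmd wraps it:

  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd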
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:25:34.293 04:47:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:25:34.293 04:47:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:34.293 04:47:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:34.293 04:47:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:25:34.304 ************************************
00:25:34.304 START TEST xnvme_rpc
00:25:34.304 ************************************
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70834
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70834
00:25:34.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70834 ']'
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:34.304 04:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:34.554 [2024-11-27 04:47:41.541381] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:34.554 [2024-11-27 04:47:41.541514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70834 ]
00:25:34.816 [2024-11-27 04:47:41.698767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.816 [2024-11-27 04:47:41.802997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:35.388 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:35.388 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 xnvme_bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70834
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70834 ']'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70834
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70834
00:25:35.389 killing process with pid 70834
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70834'
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70834
00:25:35.389 04:47:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70834
00:25:37.303
00:25:37.303 real 0m2.677s
00:25:37.303 user 0m2.768s
00:25:37.303 sys 0m0.367s
00:25:37.303 04:47:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:37.303 04:47:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:25:37.303 ************************************
00:25:37.303 END TEST xnvme_rpc
00:25:37.303 ************************************
00:25:37.303 04:47:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:25:37.303 04:47:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:37.303 04:47:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:37.303 04:47:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:25:37.303 ************************************
00:25:37.303 START TEST xnvme_bdevperf
00:25:37.303 ************************************
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:37.303 04:47:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:37.303 {
00:25:37.303 "subsystems": [
00:25:37.303 {
00:25:37.303 "subsystem": "bdev",
00:25:37.303 "config": [
00:25:37.303 {
00:25:37.303 "params": {
00:25:37.303 "io_mechanism": "io_uring_cmd",
00:25:37.303 "conserve_cpu": false,
00:25:37.303 "filename": "/dev/ng0n1",
00:25:37.303 "name": "xnvme_bdev"
00:25:37.303 },
00:25:37.303 "method": "bdev_xnvme_create"
00:25:37.303 },
00:25:37.303 {
00:25:37.303 "method": "bdev_wait_for_examine"
00:25:37.303 }
00:25:37.303 ]
00:25:37.303 }
00:25:37.303 ]
00:25:37.303 }
00:25:37.303 [2024-11-27 04:47:44.282099] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:37.303 [2024-11-27 04:47:44.282218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70898 ]
00:25:37.303 [2024-11-27 04:47:44.440163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:37.564 [2024-11-27 04:47:44.544928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.826 Running I/O for 5 seconds...
00:25:39.765 41743.00 IOPS, 163.06 MiB/s
[2024-11-27T04:47:47.911Z] 39190.50 IOPS, 153.09 MiB/s
[2024-11-27T04:47:48.855Z] 38033.67 IOPS, 148.57 MiB/s
[2024-11-27T04:47:50.233Z] 37338.25 IOPS, 145.85 MiB/s
00:25:43.030 Latency(us)
00:25:43.030 [2024-11-27T04:47:50.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:43.030 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:25:43.030 xnvme_bdev : 5.00 36907.51 144.17 0.00 0.00 1730.08 256.79 12149.37
00:25:43.030 [2024-11-27T04:47:50.233Z] ===================================================================================================================
00:25:43.030 [2024-11-27T04:47:50.233Z] Total : 36907.51 144.17 0.00 0.00 1730.08 256.79 12149.37
00:25:43.603 04:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:43.603 04:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:25:43.603 04:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:43.603 04:47:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:43.603 04:47:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:43.603 {
00:25:43.603 "subsystems": [
00:25:43.603 {
00:25:43.603 "subsystem": "bdev",
00:25:43.603 "config": [
00:25:43.603 {
00:25:43.603 "params": {
00:25:43.603 "io_mechanism": "io_uring_cmd",
00:25:43.603 "conserve_cpu": false,
00:25:43.603 "filename": "/dev/ng0n1",
00:25:43.603 "name": "xnvme_bdev"
00:25:43.603 },
00:25:43.603 "method": "bdev_xnvme_create"
00:25:43.603 },
00:25:43.603 {
00:25:43.603 "method": "bdev_wait_for_examine"
00:25:43.603 }
00:25:43.603 ]
00:25:43.603 }
00:25:43.603 ]
00:25:43.603 }
00:25:43.603 [2024-11-27 04:47:50.688324] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:43.603 [2024-11-27 04:47:50.688439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70978 ]
00:25:43.864 [2024-11-27 04:47:50.849318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:43.864 [2024-11-27 04:47:50.954175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:44.126 Running I/O for 5 seconds...
00:25:46.455 13288.00 IOPS, 51.91 MiB/s
[2024-11-27T04:47:54.229Z] 13046.50 IOPS, 50.96 MiB/s
[2024-11-27T04:47:55.613Z] 10225.00 IOPS, 39.94 MiB/s
[2024-11-27T04:47:56.550Z] 8992.25 IOPS, 35.13 MiB/s
[2024-11-27T04:47:56.550Z] 8842.00 IOPS, 34.54 MiB/s
00:25:49.347 Latency(us)
00:25:49.347 [2024-11-27T04:47:56.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.347 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:25:49.347 xnvme_bdev : 5.01 8836.24 34.52 0.00 0.00 7231.99 58.68 690446.97
00:25:49.347 [2024-11-27T04:47:56.550Z] ===================================================================================================================
00:25:49.347 [2024-11-27T04:47:56.550Z] Total : 8836.24 34.52 0.00 0.00 7231.99 58.68 690446.97
00:25:49.916 04:47:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:49.916 04:47:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:49.916 04:47:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:25:49.916 04:47:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:49.916 04:47:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:49.916 {
00:25:49.916 "subsystems": [
00:25:49.916 {
00:25:49.916 "subsystem": "bdev",
00:25:49.916 "config": [
00:25:49.916 {
00:25:49.916 "params": {
00:25:49.916 "io_mechanism": "io_uring_cmd",
00:25:49.916 "conserve_cpu": false,
00:25:49.916 "filename": "/dev/ng0n1",
00:25:49.916 "name": "xnvme_bdev"
00:25:49.916 },
00:25:49.916 "method": "bdev_xnvme_create"
00:25:49.916 },
00:25:49.916 {
00:25:49.916 "method": "bdev_wait_for_examine"
00:25:49.916 }
00:25:49.916 ]
00:25:49.916 }
00:25:49.916 ]
00:25:49.916 }
00:25:49.916 [2024-11-27 04:47:57.017624] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:49.916 [2024-11-27 04:47:57.017746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71052 ]
00:25:50.175 [2024-11-27 04:47:57.178556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:50.175 [2024-11-27 04:47:57.283302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.435 Running I/O for 5 seconds...
00:25:52.352 65152.00 IOPS, 254.50 MiB/s
[2024-11-27T04:48:00.933Z] 63360.00 IOPS, 247.50 MiB/s
[2024-11-27T04:48:01.872Z] 62656.00 IOPS, 244.75 MiB/s
[2024-11-27T04:48:02.812Z] 62112.00 IOPS, 242.62 MiB/s
00:25:55.609 Latency(us)
00:25:55.609 [2024-11-27T04:48:02.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:55.609 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:25:55.609 xnvme_bdev : 5.00 62679.91 244.84 0.00 0.00 1017.43 491.52 3012.14
00:25:55.609 [2024-11-27T04:48:02.812Z] ===================================================================================================================
00:25:55.609 [2024-11-27T04:48:02.812Z] Total : 62679.91 244.84 0.00 0.00 1017.43 491.52 3012.14
00:25:56.305 04:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:25:56.305 04:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:25:56.305 04:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:25:56.305 04:48:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:25:56.305 04:48:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:56.305 {
00:25:56.305 "subsystems": [
00:25:56.305 {
00:25:56.305 "subsystem": "bdev",
00:25:56.305 "config": [
00:25:56.305 {
00:25:56.305 "params": {
00:25:56.305 "io_mechanism": "io_uring_cmd",
00:25:56.305 "conserve_cpu": false,
00:25:56.305 "filename": "/dev/ng0n1",
00:25:56.305 "name": "xnvme_bdev"
00:25:56.305 },
00:25:56.305 "method": "bdev_xnvme_create"
00:25:56.305 },
00:25:56.305 {
00:25:56.305 "method": "bdev_wait_for_examine"
00:25:56.305 }
00:25:56.305 ]
00:25:56.305 }
00:25:56.305 ]
00:25:56.305 }
00:25:56.305 [2024-11-27 04:48:03.331583] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:25:56.305 [2024-11-27 04:48:03.331705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71126 ]
00:25:56.564 [2024-11-27 04:48:03.492809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.564 [2024-11-27 04:48:03.595746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.823 Running I/O for 5 seconds...
00:25:58.698 221.00 IOPS, 0.86 MiB/s
[2024-11-27T04:48:07.282Z] 1343.50 IOPS, 5.25 MiB/s
[2024-11-27T04:48:07.853Z] 10244.67 IOPS, 40.02 MiB/s
[2024-11-27T04:48:09.232Z] 19577.75 IOPS, 76.48 MiB/s
[2024-11-27T04:48:09.232Z] 25614.60 IOPS, 100.06 MiB/s
00:26:02.029 Latency(us)
00:26:02.029 [2024-11-27T04:48:09.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.029 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:26:02.029 xnvme_bdev : 5.01 25612.32 100.05 0.00 0.00 2494.11 84.28 767880.27
00:26:02.029 [2024-11-27T04:48:09.232Z] ===================================================================================================================
00:26:02.029 [2024-11-27T04:48:09.232Z] Total : 25612.32 100.05 0.00 0.00 2494.11 84.28 767880.27
00:26:02.599 ************************************
00:26:02.599
00:26:02.599 real 0m25.346s
00:26:02.599 user 0m13.568s
00:26:02.599 sys 0m11.289s
00:26:02.599 04:48:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:02.599 04:48:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:02.599 END TEST xnvme_bdevperf
00:26:02.599 ************************************
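Note: this bdevperf round exercised two workloads beyond randread/randwrite: -w unmap issues deallocate commands and -w write_zeroes issues zeroing commands, both against the same xnvme bdev; everything else in the invocation is unchanged, e.g. (command as traced above, minus the harness's config plumbing):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 4096 -w unmap -t 5 -T xnvme_bdev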
00:26:02.599 04:48:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:26:02.599 04:48:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:02.599 04:48:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:02.599 04:48:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:26:02.599 ************************************
00:26:02.599 START TEST xnvme_fio_plugin
00:26:02.599 ************************************
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:02.599 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:26:02.600 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:26:02.600 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:26:02.600 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:02.600 04:48:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:02.600 {
00:26:02.600 "subsystems": [
00:26:02.600 {
00:26:02.600 "subsystem": "bdev",
00:26:02.600 "config": [
00:26:02.600 {
00:26:02.600 "params": {
00:26:02.600 "io_mechanism": "io_uring_cmd",
00:26:02.600 "conserve_cpu": false,
00:26:02.600 "filename": "/dev/ng0n1",
00:26:02.600 "name": "xnvme_bdev"
00:26:02.600 },
00:26:02.600 "method": "bdev_xnvme_create"
00:26:02.600 },
00:26:02.600 {
00:26:02.600 "method": "bdev_wait_for_examine"
00:26:02.600 }
00:26:02.600 ]
00:26:02.600 }
00:26:02.600 ]
00:26:02.600 }
00:26:02.860 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:26:02.860 fio-3.35
00:26:02.860 Starting 1 thread
00:26:09.459
00:26:09.459 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71240: Wed Nov 27 04:48:15 2024
00:26:09.459 read: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5001msec)
00:26:09.459 slat (nsec): min=2869, max=67527, avg=4224.46, stdev=2597.77
00:26:09.459 clat (usec): min=758, max=4519, avg=1604.64, stdev=335.05
00:26:09.459 lat (usec): min=760, max=4526, avg=1608.87, stdev=335.59
00:26:09.459 clat percentiles (usec):
00:26:09.459 | 1.00th=[ 947], 5.00th=[ 1090], 10.00th=[ 1188], 20.00th=[ 1319],
00:26:09.459 | 30.00th=[ 1418], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680],
00:26:09.459 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147],
00:26:09.459 | 99.00th=[ 2540], 99.50th=[ 2704], 99.90th=[ 3261], 99.95th=[ 3556],
00:26:09.459 | 99.99th=[ 3687]
00:26:09.459 bw ( KiB/s): min=135168, max=152064, per=100.00%, avg=145029.33, stdev=5152.36, samples=9
00:26:09.459 iops : min=33792, max=38016, avg=36257.33, stdev=1288.09, samples=9
00:26:09.459 lat (usec) : 1000=2.02%
00:26:09.459 lat (msec) : 2=86.99%, 4=10.99%, 10=0.01%
00:26:09.459 cpu : usr=37.84%, sys=60.90%, ctx=11, majf=0, minf=762
00:26:09.460 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:26:09.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:09.460 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:26:09.460 issued rwts: total=180223,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:09.460 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:09.460
00:26:09.460 Run status group 0 (all jobs):
00:26:09.460 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5001-5001msec
00:26:09.460 -----------------------------------------------------
00:26:09.460 Suppressions used:
00:26:09.460 count bytes template
00:26:09.460 1 11 /usr/src/fio/parse.c
00:26:09.460 1 8 libtcmalloc_minimal.so
00:26:09.460 1 904 libcrypto.so
00:26:09.460 -----------------------------------------------------
00:26:09.460
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:09.460 04:48:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:26:09.460 {
00:26:09.460 "subsystems": [
00:26:09.460 {
00:26:09.460 "subsystem": "bdev",
00:26:09.460 "config": [
00:26:09.460 {
00:26:09.460 "params": {
00:26:09.460 "io_mechanism": "io_uring_cmd",
00:26:09.460 "conserve_cpu": false,
00:26:09.460 "filename": "/dev/ng0n1",
00:26:09.460 "name": "xnvme_bdev"
00:26:09.460 },
00:26:09.460 "method": "bdev_xnvme_create"
00:26:09.460 },
00:26:09.460 {
00:26:09.460 "method": "bdev_wait_for_examine"
00:26:09.460 }
00:26:09.460 ]
00:26:09.460 }
00:26:09.460 ]
00:26:09.460 }
00:26:09.460 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:26:09.460 fio-3.35
00:26:09.460 Starting 1 thread
00:26:16.044
00:26:16.044 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71331: Wed Nov 27 04:48:22 2024
00:26:16.044 write: IOPS=36.1k, BW=141MiB/s (148MB/s)(706MiB/5001msec); 0 zone resets
00:26:16.044 slat (nsec): min=2907, max=89835, avg=4508.97, stdev=2406.15
00:26:16.044 clat (usec): min=559, max=4694, avg=1594.33, stdev=352.33
00:26:16.044 lat (usec): min=563, max=4699, avg=1598.83, stdev=352.77
00:26:16.044 clat percentiles (usec):
00:26:16.044 | 1.00th=[ 914], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1287],
00:26:16.044 | 30.00th=[ 1401], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663],
00:26:16.044 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2040], 95.00th=[ 2180],
00:26:16.044 | 99.00th=[ 2540], 99.50th=[ 2704], 99.90th=[ 3359], 99.95th=[ 3458],
00:26:16.044 | 99.99th=[ 3621]
00:26:16.044 bw ( KiB/s): min=134096, max=164264, per=100.00%, avg=145165.33, stdev=9321.91, samples=9
00:26:16.044 iops : min=33524, max=41066, avg=36291.33, stdev=2330.48, samples=9
00:26:16.044 lat (usec) : 750=0.03%, 1000=3.10%
00:26:16.044 lat (msec) : 2=85.04%, 4=11.83%, 10=0.01%
00:26:16.044 cpu : usr=39.80%, sys=59.06%, ctx=10, majf=0, minf=763
00:26:16.044 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6%
00:26:16.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.044 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:26:16.044 issued rwts: total=0,180621,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:16.044 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:16.044
00:26:16.044 Run status group 0 (all jobs):
00:26:16.044 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=706MiB (740MB), run=5001-5001msec
00:26:16.045 -----------------------------------------------------
00:26:16.045 Suppressions used:
00:26:16.045 count bytes template
00:26:16.045 1 11 /usr/src/fio/parse.c
00:26:16.045 1 8 libtcmalloc_minimal.so
00:26:16.045 1 904 libcrypto.so
00:26:16.045 -----------------------------------------------------
00:26:16.045
00:26:16.045
00:26:16.045 real 0m13.607s
00:26:16.045 user 0m6.672s
00:26:16.045 sys 0m6.490s
00:26:16.045 04:48:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:16.045 ************************************
00:26:16.045 END TEST xnvme_fio_plugin
00:26:16.045 ************************************
00:26:16.045 04:48:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
conserve_cpu=true 00:26:16.304 04:48:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:26:16.304 04:48:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:16.304 04:48:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.304 04:48:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:16.304 ************************************ 00:26:16.304 START TEST xnvme_rpc 00:26:16.304 ************************************ 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71416 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71416 00:26:16.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71416 ']' 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.304 04:48:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.304 [2024-11-27 04:48:23.392404] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
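The xnvme_rpc test launching here drives the fresh spdk_tgt entirely over its UNIX-socket RPC channel. A minimal sketch of the same create/inspect/delete round trip using SPDK's stock scripts/rpc.py client (run from the repo root; /dev/ng0n1 and xnvme_bdev are the node and name exercised in this log, and rpc_cmd in the transcript is the harness's wrapper around this client — the readiness loop below is an illustration, not the harness's own waitforlisten):

    ./build/bin/spdk_tgt &
    tgt_pid=$!
    # Poll until the default socket /var/tmp/spdk.sock answers RPCs
    # (the harness's waitforlisten helper does this more carefully).
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    # Register /dev/ng0n1 as an xnvme bdev over io_uring_cmd; -c sets conserve_cpu.
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    # Read the registered parameters back, as the rpc_xnvme helper does with jq.
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    # Tear down.
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"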
00:26:16.304 [2024-11-27 04:48:23.392760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71416 ] 00:26:16.565 [2024-11-27 04:48:23.553058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.565 [2024-11-27 04:48:23.661883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 xnvme_bdev 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:26:17.398 
04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71416 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71416 ']' 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71416 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71416 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.398 killing process with pid 71416 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71416' 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71416 00:26:17.398 04:48:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71416 00:26:18.786 00:26:18.786 real 0m2.653s 00:26:18.786 user 0m2.765s 00:26:18.786 sys 0m0.344s 00:26:18.786 04:48:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.786 ************************************ 00:26:18.786 END TEST xnvme_rpc 00:26:18.786 ************************************ 00:26:18.786 04:48:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.048 04:48:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:26:19.048 04:48:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:19.048 04:48:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.048 04:48:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:19.048 ************************************ 00:26:19.048 START TEST xnvme_bdevperf 00:26:19.048 ************************************ 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:19.048 04:48:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.048 { 00:26:19.048 "subsystems": [ 00:26:19.048 { 00:26:19.049 "subsystem": "bdev", 00:26:19.049 "config": [ 00:26:19.049 { 00:26:19.049 "params": { 00:26:19.049 "io_mechanism": "io_uring_cmd", 00:26:19.049 "conserve_cpu": true, 00:26:19.049 "filename": "/dev/ng0n1", 00:26:19.049 "name": "xnvme_bdev" 00:26:19.049 }, 00:26:19.049 "method": "bdev_xnvme_create" 00:26:19.049 }, 00:26:19.049 { 00:26:19.049 "method": "bdev_wait_for_examine" 00:26:19.049 } 00:26:19.049 ] 00:26:19.049 } 00:26:19.049 ] 00:26:19.049 } 00:26:19.049 [2024-11-27 04:48:26.088635] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:26:19.049 [2024-11-27 04:48:26.088768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:26:19.310 [2024-11-27 04:48:26.250296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.310 [2024-11-27 04:48:26.355419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.573 Running I/O for 5 seconds... 00:26:21.478 38195.00 IOPS, 149.20 MiB/s [2024-11-27T04:48:30.064Z] 39069.00 IOPS, 152.61 MiB/s [2024-11-27T04:48:30.632Z] 39858.67 IOPS, 155.70 MiB/s [2024-11-27T04:48:32.010Z] 40486.00 IOPS, 158.15 MiB/s [2024-11-27T04:48:32.010Z] 40842.00 IOPS, 159.54 MiB/s 00:26:24.807 Latency(us) 00:26:24.807 [2024-11-27T04:48:32.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.807 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:24.807 xnvme_bdev : 5.00 40834.84 159.51 0.00 0.00 1563.16 456.86 18450.90 00:26:24.807 [2024-11-27T04:48:32.010Z] =================================================================================================================== 00:26:24.807 [2024-11-27T04:48:32.010Z] Total : 40834.84 159.51 0.00 0.00 1563.16 456.86 18450.90 00:26:25.375 04:48:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:25.376 04:48:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:25.376 04:48:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:26:25.376 04:48:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:25.376 04:48:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.376 { 00:26:25.376 "subsystems": [ 00:26:25.376 { 00:26:25.376 "subsystem": "bdev", 00:26:25.376 "config": [ 00:26:25.376 { 00:26:25.376 "params": { 00:26:25.376 "io_mechanism": "io_uring_cmd", 00:26:25.376 "conserve_cpu": true, 00:26:25.376 "filename": "/dev/ng0n1", 00:26:25.376 "name": "xnvme_bdev" 00:26:25.376 }, 00:26:25.376 "method": "bdev_xnvme_create" 00:26:25.376 }, 00:26:25.376 { 00:26:25.376 "method": "bdev_wait_for_examine" 00:26:25.376 } 00:26:25.376 ] 00:26:25.376 } 00:26:25.376 ] 00:26:25.376 } 00:26:25.376 [2024-11-27 04:48:32.392282] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
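The bdevperf passes in this suite never touch a config file on disk: gen_conf writes the JSON dumped above to stdout and the shell hands it over as /dev/fd/62. A sketch of the equivalent standalone command, with the flags and subsystem config reproduced from the randread pass just logged (process substitution supplies the descriptor):

    # Same flags as the logged randread pass; the heredoc is the config dumped above.
    ./build/examples/bdevperf --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": true,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    ) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The three passes that follow reuse the identical config and only swap -w for randwrite, unmap, and write_zeroes.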
00:26:25.376 [2024-11-27 04:48:32.392396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71553 ] 00:26:25.376 [2024-11-27 04:48:32.552718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.636 [2024-11-27 04:48:32.654743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.895 Running I/O for 5 seconds... 00:26:27.770 41958.00 IOPS, 163.90 MiB/s [2024-11-27T04:48:35.913Z] 42030.50 IOPS, 164.18 MiB/s [2024-11-27T04:48:37.293Z] 41860.67 IOPS, 163.52 MiB/s [2024-11-27T04:48:38.234Z] 41485.75 IOPS, 162.05 MiB/s [2024-11-27T04:48:38.234Z] 40791.20 IOPS, 159.34 MiB/s 00:26:31.031 Latency(us) 00:26:31.031 [2024-11-27T04:48:38.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.031 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:26:31.031 xnvme_bdev : 5.00 40780.98 159.30 0.00 0.00 1564.09 576.59 6150.30 00:26:31.031 [2024-11-27T04:48:38.234Z] =================================================================================================================== 00:26:31.031 [2024-11-27T04:48:38.234Z] Total : 40780.98 159.30 0.00 0.00 1564.09 576.59 6150.30 00:26:31.601 04:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:31.601 04:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:26:31.601 04:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:31.601 04:48:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:31.601 04:48:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.601 { 00:26:31.601 "subsystems": [ 00:26:31.601 { 00:26:31.601 "subsystem": "bdev", 00:26:31.601 "config": [ 00:26:31.601 { 00:26:31.601 "params": { 00:26:31.601 "io_mechanism": "io_uring_cmd", 00:26:31.601 "conserve_cpu": true, 00:26:31.601 "filename": "/dev/ng0n1", 00:26:31.601 "name": "xnvme_bdev" 00:26:31.601 }, 00:26:31.601 "method": "bdev_xnvme_create" 00:26:31.601 }, 00:26:31.601 { 00:26:31.601 "method": "bdev_wait_for_examine" 00:26:31.601 } 00:26:31.601 ] 00:26:31.601 } 00:26:31.601 ] 00:26:31.601 } 00:26:31.601 [2024-11-27 04:48:38.709779] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:26:31.601 [2024-11-27 04:48:38.709905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71632 ] 00:26:31.861 [2024-11-27 04:48:38.869735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.861 [2024-11-27 04:48:38.975506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.122 Running I/O for 5 seconds... 
00:26:34.451 72512.00 IOPS, 283.25 MiB/s [2024-11-27T04:48:42.597Z] 72192.00 IOPS, 282.00 MiB/s [2024-11-27T04:48:43.543Z] 72106.67 IOPS, 281.67 MiB/s [2024-11-27T04:48:44.489Z] 72144.00 IOPS, 281.81 MiB/s 00:26:37.287 Latency(us) 00:26:37.287 [2024-11-27T04:48:44.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.287 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:26:37.287 xnvme_bdev : 5.00 72115.09 281.70 0.00 0.00 883.95 456.86 3024.74 00:26:37.287 [2024-11-27T04:48:44.490Z] =================================================================================================================== 00:26:37.287 [2024-11-27T04:48:44.490Z] Total : 72115.09 281.70 0.00 0.00 883.95 456.86 3024.74 00:26:37.860 04:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:37.860 04:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:26:37.860 04:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:37.860 04:48:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:37.860 04:48:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.121 { 00:26:38.121 "subsystems": [ 00:26:38.121 { 00:26:38.121 "subsystem": "bdev", 00:26:38.121 "config": [ 00:26:38.121 { 00:26:38.122 "params": { 00:26:38.122 "io_mechanism": "io_uring_cmd", 00:26:38.122 "conserve_cpu": true, 00:26:38.122 "filename": "/dev/ng0n1", 00:26:38.122 "name": "xnvme_bdev" 00:26:38.122 }, 00:26:38.122 "method": "bdev_xnvme_create" 00:26:38.122 }, 00:26:38.122 { 00:26:38.122 "method": "bdev_wait_for_examine" 00:26:38.122 } 00:26:38.122 ] 00:26:38.122 } 00:26:38.122 ] 00:26:38.122 } 00:26:38.122 [2024-11-27 04:48:45.122832] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:26:38.122 [2024-11-27 04:48:45.122982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:26:38.122 [2024-11-27 04:48:45.289451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.382 [2024-11-27 04:48:45.430033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.644 Running I/O for 5 seconds... 
00:26:40.974 37340.00 IOPS, 145.86 MiB/s [2024-11-27T04:48:48.775Z] 35760.00 IOPS, 139.69 MiB/s [2024-11-27T04:48:50.167Z] 34389.33 IOPS, 134.33 MiB/s [2024-11-27T04:48:50.742Z] 32726.25 IOPS, 127.84 MiB/s [2024-11-27T04:48:50.742Z] 29299.80 IOPS, 114.45 MiB/s 00:26:43.539 Latency(us) 00:26:43.539 [2024-11-27T04:48:50.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.539 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:26:43.539 xnvme_bdev : 5.00 29290.55 114.42 0.00 0.00 2179.83 72.07 29239.14 00:26:43.539 [2024-11-27T04:48:50.742Z] =================================================================================================================== 00:26:43.539 [2024-11-27T04:48:50.742Z] Total : 29290.55 114.42 0.00 0.00 2179.83 72.07 29239.14 00:26:44.482 00:26:44.482 real 0m25.539s 00:26:44.482 user 0m18.169s 00:26:44.482 sys 0m5.534s 00:26:44.482 04:48:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.482 ************************************ 00:26:44.482 END TEST xnvme_bdevperf 00:26:44.482 ************************************ 00:26:44.482 04:48:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.482 04:48:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:26:44.482 04:48:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:44.482 04:48:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.482 04:48:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:44.482 ************************************ 00:26:44.482 START TEST xnvme_fio_plugin 00:26:44.482 ************************************ 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
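Both xnvme_fio_plugin passes (the conserve_cpu=false pair earlier and the pair starting here) rely on the same trick before launching fio: the sanitizer probe that follows (ldd | grep libasan | awk) resolves which ASan runtime the SPDK fio plugin links against, then preloads it ahead of the plugin, since a stock fio binary would otherwise dlopen the instrumented .so without ASan's interceptors in place. Stripped of the harness plumbing, the pattern is (the JSON config path is a stand-in; the harness feeds it via /dev/fd/62):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer runtime first, then the plugin itself.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev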
00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:44.482 04:48:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:44.482 { 00:26:44.482 "subsystems": [ 00:26:44.482 { 00:26:44.482 "subsystem": "bdev", 00:26:44.482 "config": [ 00:26:44.482 { 00:26:44.482 "params": { 00:26:44.482 "io_mechanism": "io_uring_cmd", 00:26:44.482 "conserve_cpu": true, 00:26:44.482 "filename": "/dev/ng0n1", 00:26:44.482 "name": "xnvme_bdev" 00:26:44.482 }, 00:26:44.482 "method": "bdev_xnvme_create" 00:26:44.482 }, 00:26:44.482 { 00:26:44.482 "method": "bdev_wait_for_examine" 00:26:44.482 } 00:26:44.482 ] 00:26:44.482 } 00:26:44.482 ] 00:26:44.482 } 00:26:44.748 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:26:44.748 fio-3.35 00:26:44.748 Starting 1 thread 00:26:51.331 00:26:51.331 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71820: Wed Nov 27 04:48:57 2024 00:26:51.331 read: IOPS=39.7k, BW=155MiB/s (163MB/s)(775MiB/5001msec) 00:26:51.331 slat (usec): min=2, max=143, avg= 4.10, stdev= 2.49 00:26:51.331 clat (usec): min=638, max=4179, avg=1449.02, stdev=319.92 00:26:51.331 lat (usec): min=641, max=4183, avg=1453.12, stdev=320.68 00:26:51.331 clat percentiles (usec): 00:26:51.331 | 1.00th=[ 889], 5.00th=[ 1004], 10.00th=[ 1090], 20.00th=[ 1188], 00:26:51.331 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1401], 60.00th=[ 1483], 00:26:51.331 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1876], 95.00th=[ 2040], 00:26:51.331 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 2999], 99.95th=[ 3195], 00:26:51.331 | 99.99th=[ 3490] 00:26:51.331 bw ( KiB/s): min=139264, max=175104, per=100.00%, avg=160988.44, stdev=12314.43, samples=9 00:26:51.331 iops : min=34816, max=43776, avg=40247.11, stdev=3078.61, samples=9 00:26:51.331 lat (usec) : 750=0.03%, 1000=4.90% 00:26:51.331 lat (msec) : 2=89.09%, 4=5.97%, 10=0.01% 00:26:51.331 cpu : usr=62.20%, sys=34.78%, ctx=14, majf=0, minf=762 00:26:51.331 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:26:51.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.331 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:26:51.331 issued rwts: total=198478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.331 00:26:51.331 Run status group 0 (all jobs): 00:26:51.331 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=775MiB (813MB), run=5001-5001msec 00:26:51.592 ----------------------------------------------------- 00:26:51.592 Suppressions used: 00:26:51.592 count bytes template 00:26:51.592 1 11 /usr/src/fio/parse.c 00:26:51.592 1 8 libtcmalloc_minimal.so 00:26:51.592 1 904 libcrypto.so 00:26:51.592 ----------------------------------------------------- 00:26:51.592 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:26:51.592 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:51.593 04:48:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:51.593 { 00:26:51.593 "subsystems": [ 00:26:51.593 { 00:26:51.593 "subsystem": "bdev", 00:26:51.593 "config": [ 00:26:51.593 { 00:26:51.593 "params": { 00:26:51.593 "io_mechanism": "io_uring_cmd", 00:26:51.593 "conserve_cpu": true, 00:26:51.593 "filename": "/dev/ng0n1", 00:26:51.593 "name": "xnvme_bdev" 00:26:51.593 }, 00:26:51.593 "method": "bdev_xnvme_create" 00:26:51.593 }, 00:26:51.593 { 00:26:51.593 "method": "bdev_wait_for_examine" 00:26:51.593 } 00:26:51.593 ] 00:26:51.593 } 00:26:51.593 ] 00:26:51.593 } 00:26:51.852 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:26:51.852 fio-3.35 00:26:51.852 Starting 1 thread 00:26:58.445 00:26:58.445 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71911: Wed Nov 27 04:49:04 2024 00:26:58.445 write: IOPS=33.4k, BW=131MiB/s (137MB/s)(653MiB/5002msec); 0 zone resets 00:26:58.445 slat (usec): min=2, max=385, avg= 4.90, stdev= 3.08 00:26:58.445 clat (usec): min=88, max=46711, avg=1718.06, stdev=882.02 00:26:58.445 lat (usec): min=98, max=46714, avg=1722.96, stdev=882.27 00:26:58.445 clat percentiles (usec): 00:26:58.445 | 1.00th=[ 1020], 5.00th=[ 1188], 10.00th=[ 1287], 20.00th=[ 1401], 00:26:58.445 | 30.00th=[ 1500], 40.00th=[ 1582], 50.00th=[ 1663], 60.00th=[ 1745], 00:26:58.445 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2147], 95.00th=[ 2311], 00:26:58.445 | 99.00th=[ 2704], 99.50th=[ 3032], 99.90th=[ 5276], 99.95th=[13435], 00:26:58.445 | 99.99th=[44303] 00:26:58.445 bw ( KiB/s): min=123320, max=146296, per=99.05%, avg=132471.11, stdev=6662.49, samples=9 00:26:58.445 iops : min=30830, max=36574, avg=33117.78, stdev=1665.62, samples=9 00:26:58.445 lat (usec) : 100=0.01%, 250=0.01%, 500=0.03%, 750=0.06%, 1000=0.67% 00:26:58.445 lat (msec) : 2=81.83%, 4=17.23%, 10=0.11%, 20=0.03%, 50=0.04% 00:26:58.445 cpu : usr=59.55%, sys=36.59%, ctx=15, majf=0, minf=763 00:26:58.445 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.3%, 16=25.0%, 32=50.6%, >=64=1.7% 00:26:58.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.445 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:26:58.445 issued rwts: total=0,167249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.445 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:58.445 00:26:58.445 Run status group 0 (all jobs): 00:26:58.445 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=653MiB (685MB), run=5002-5002msec 00:26:58.445 ----------------------------------------------------- 00:26:58.445 Suppressions used: 00:26:58.445 count bytes template 00:26:58.445 1 11 /usr/src/fio/parse.c 00:26:58.445 1 8 libtcmalloc_minimal.so 00:26:58.445 1 904 libcrypto.so 00:26:58.445 ----------------------------------------------------- 00:26:58.445 00:26:58.445 00:26:58.445 real 0m13.836s 00:26:58.445 user 0m8.968s 00:26:58.445 sys 0m4.195s 00:26:58.445 ************************************ 00:26:58.445 END TEST xnvme_fio_plugin 00:26:58.445 ************************************ 00:26:58.445 04:49:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.445 04:49:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:58.445 04:49:05 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71416 00:26:58.445 04:49:05 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71416 ']' 00:26:58.445 04:49:05 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71416 00:26:58.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71416) - No such process 00:26:58.445 Process with pid 71416 is not found 00:26:58.445 04:49:05 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71416 is not found' 00:26:58.445 ************************************ 00:26:58.445 END TEST nvme_xnvme 00:26:58.445 ************************************ 00:26:58.445 00:26:58.445 real 3m28.485s 00:26:58.445 user 2m4.053s 00:26:58.445 sys 1m10.364s 00:26:58.445 04:49:05 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.445 04:49:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:58.445 04:49:05 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:58.445 04:49:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.445 04:49:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.445 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:26:58.445 ************************************ 00:26:58.445 START TEST blockdev_xnvme 00:26:58.445 ************************************ 00:26:58.445 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:58.445 * Looking for test storage... 00:26:58.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:58.445 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.445 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.706 04:49:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.706 --rc genhtml_branch_coverage=1 00:26:58.706 --rc genhtml_function_coverage=1 00:26:58.706 --rc genhtml_legend=1 00:26:58.706 --rc geninfo_all_blocks=1 00:26:58.706 --rc geninfo_unexecuted_blocks=1 00:26:58.706 00:26:58.706 ' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.706 --rc genhtml_branch_coverage=1 00:26:58.706 --rc genhtml_function_coverage=1 00:26:58.706 --rc genhtml_legend=1 00:26:58.706 --rc geninfo_all_blocks=1 00:26:58.706 --rc geninfo_unexecuted_blocks=1 00:26:58.706 00:26:58.706 ' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.706 --rc genhtml_branch_coverage=1 00:26:58.706 --rc genhtml_function_coverage=1 00:26:58.706 --rc genhtml_legend=1 00:26:58.706 --rc geninfo_all_blocks=1 00:26:58.706 --rc geninfo_unexecuted_blocks=1 00:26:58.706 00:26:58.706 ' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.706 --rc genhtml_branch_coverage=1 00:26:58.706 --rc genhtml_function_coverage=1 00:26:58.706 --rc genhtml_legend=1 00:26:58.706 --rc geninfo_all_blocks=1 00:26:58.706 --rc geninfo_unexecuted_blocks=1 00:26:58.706 00:26:58.706 ' 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:26:58.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72045 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:58.706 04:49:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72045 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72045 ']' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.706 04:49:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:58.706 [2024-11-27 04:49:05.820040] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
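blockdev.sh manages its spdk_tgt the same way every suite in this log does: background the daemon, arm a trap so a failing test still reaps it, and block until the RPC socket is live. Condensed from the lines just logged (waitforlisten and killprocess are the autotest_common.sh helpers named in the transcript):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    # Ensure the daemon dies even if a test aborts mid-run.
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Returns once /var/tmp/spdk.sock accepts RPC connections.
    waitforlisten "$spdk_tgt_pid"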
00:26:58.706 [2024-11-27 04:49:05.820172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72045 ] 00:26:58.966 [2024-11-27 04:49:05.980833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.966 [2024-11-27 04:49:06.082432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.535 04:49:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.535 04:49:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:26:59.535 04:49:06 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:26:59.535 04:49:06 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:26:59.535 04:49:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:26:59.535 04:49:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:26:59.535 04:49:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:00.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.673 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:00.673 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:00.673 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:00.673 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:27:00.673 nvme0n1 00:27:00.673 nvme0n2 00:27:00.673 nvme0n3 00:27:00.673 nvme1n1 00:27:00.673 nvme2n1 00:27:00.673 nvme3n1 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:27:00.673 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.673 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.674 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.674 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.674 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "300989b9-e37b-4675-b2e2-47bb2662b777"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "300989b9-e37b-4675-b2e2-47bb2662b777",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c238ad38-0a94-4e58-8ec4-8d8bf6071aed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c238ad38-0a94-4e58-8ec4-8d8bf6071aed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5c5e4dc1-c6c3-472c-bb7b-866bfe9550ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5c5e4dc1-c6c3-472c-bb7b-866bfe9550ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3bcdbf3c-a25f-4af1-8e11-3d50f85520d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3bcdbf3c-a25f-4af1-8e11-3d50f85520d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f131b651-952b-44c0-884b-1bfae0e2c157"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f131b651-952b-44c0-884b-1bfae0e2c157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "649176f1-9d2f-4f99-aa26-bb32c57cfa45"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "649176f1-9d2f-4f99-aa26-bb32c57cfa45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:27:00.935 04:49:07 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72045 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72045 ']' 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72045 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72045 00:27:00.935 killing process with pid 72045 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72045' 00:27:00.935 04:49:07 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72045 00:27:00.935 04:49:07 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72045
00:27:02.319 04:49:09 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:02.319 04:49:09 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:27:02.319 04:49:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:27:02.319 04:49:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:02.319 04:49:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:27:02.580 ************************************
00:27:02.580 START TEST bdev_hello_world
00:27:02.580 ************************************
00:27:02.580 04:49:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:27:02.581 [2024-11-27 04:49:09.604346] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:27:02.581 [2024-11-27 04:49:09.604508] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72324 ]
00:27:02.581 [2024-11-27 04:49:09.772104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.842 [2024-11-27 04:49:09.908984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:03.414 [2024-11-27 04:49:10.336142] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:27:03.414 [2024-11-27 04:49:10.336375] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:27:03.414 [2024-11-27 04:49:10.336406] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:27:03.414 [2024-11-27 04:49:10.338621] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:27:03.414 [2024-11-27 04:49:10.339973] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:27:03.414 [2024-11-27 04:49:10.340159] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:27:03.414 [2024-11-27 04:49:10.340671] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
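
The write-then-read round trip logged above is SPDK's stock hello_bdev example driven against the first xNVMe bdev. A minimal sketch of the same invocation outside the test harness, assuming the repo path and bdev.json used in this run (the trailing '' in the trace is just an extra positional argument the harness appends):

    # Open nvme0n1 via the JSON bdev config, write "Hello World!", read it back.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b nvme0n1
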
00:27:03.414
00:27:03.414 [2024-11-27 04:49:10.340718] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:27:04.357
00:27:04.358 real 0m1.677s
00:27:04.358 user 0m1.268s
00:27:04.358 sys 0m0.243s
00:27:04.358 04:49:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:04.358 ************************************
00:27:04.358 END TEST bdev_hello_world
00:27:04.358 ************************************
00:27:04.358 04:49:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:27:04.358 04:49:11 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:27:04.358 04:49:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:04.358 04:49:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:04.358 04:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:27:04.358 ************************************
00:27:04.358 START TEST bdev_bounds
00:27:04.358 ************************************
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:27:04.358 Process bdevio pid: 72362
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72362
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72362'
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72362
00:27:04.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72362 ']'
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:04.358 04:49:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:27:04.358 [2024-11-27 04:49:11.340902] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:27:04.358 [2024-11-27 04:49:11.341028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ]
00:27:04.358 [2024-11-27 04:49:11.501580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:04.618 [2024-11-27 04:49:11.608198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:04.618 [2024-11-27 04:49:11.608595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:04.618 [2024-11-27 04:49:11.608611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:05.186 04:49:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:05.187 04:49:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:27:05.187 04:49:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:27:05.187 I/O targets:
00:27:05.187 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:27:05.187 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:27:05.187 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:27:05.187 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB)
00:27:05.187 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:27:05.187 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:27:05.187
00:27:05.187
00:27:05.187 CUnit - A unit testing framework for C - Version 2.1-3
00:27:05.187 http://cunit.sourceforge.net/
00:27:05.187
00:27:05.187
00:27:05.187 Suite: bdevio tests on: nvme3n1
00:27:05.187 Test: blockdev write read block ...passed
00:27:05.187 Test: blockdev write zeroes read block ...passed
00:27:05.187 Test: blockdev write zeroes read no split ...passed
00:27:05.187 Test: blockdev write zeroes read split ...passed
00:27:05.187 Test: blockdev write zeroes read split partial ...passed
00:27:05.187 Test: blockdev reset ...passed
00:27:05.187 Test: blockdev write read 8 blocks ...passed
00:27:05.187 Test: blockdev write read size > 128k ...passed
00:27:05.187 Test: blockdev write read invalid size ...passed
00:27:05.187 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.187 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.187 Test: blockdev write read max offset ...passed
00:27:05.187 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.187 Test: blockdev writev readv 8 blocks ...passed
00:27:05.187 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.187 Test: blockdev writev readv block ...passed
00:27:05.187 Test: blockdev writev readv size > 128k ...passed
00:27:05.187 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.187 Test: blockdev comparev and writev ...passed
00:27:05.187 Test: blockdev nvme passthru rw ...passed
00:27:05.187 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.187 Test: blockdev nvme admin passthru ...passed
00:27:05.187 Test: blockdev copy ...passed
00:27:05.187 Suite: bdevio tests on: nvme2n1
00:27:05.187 Test: blockdev write read block ...passed
00:27:05.187 Test: blockdev write zeroes read block ...passed
00:27:05.187 Test: blockdev write zeroes read no split ...passed
00:27:05.449 Test: blockdev write zeroes read split ...passed
00:27:05.449 Test: blockdev write zeroes read split partial ...passed
00:27:05.449 Test: blockdev reset ...passed
00:27:05.449 Test: blockdev write read 8 blocks ...passed
00:27:05.449 Test: blockdev write read size > 128k ...passed
00:27:05.449 Test: blockdev write read invalid size ...passed
00:27:05.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.449 Test: blockdev write read max offset ...passed
00:27:05.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.449 Test: blockdev writev readv 8 blocks ...passed
00:27:05.449 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.449 Test: blockdev writev readv block ...passed
00:27:05.449 Test: blockdev writev readv size > 128k ...passed
00:27:05.449 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.449 Test: blockdev comparev and writev ...passed
00:27:05.449 Test: blockdev nvme passthru rw ...passed
00:27:05.449 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.449 Test: blockdev nvme admin passthru ...passed
00:27:05.449 Test: blockdev copy ...passed
00:27:05.449 Suite: bdevio tests on: nvme1n1
00:27:05.449 Test: blockdev write read block ...passed
00:27:05.449 Test: blockdev write zeroes read block ...passed
00:27:05.449 Test: blockdev write zeroes read no split ...passed
00:27:05.449 Test: blockdev write zeroes read split ...passed
00:27:05.449 Test: blockdev write zeroes read split partial ...passed
00:27:05.449 Test: blockdev reset ...passed
00:27:05.449 Test: blockdev write read 8 blocks ...passed
00:27:05.449 Test: blockdev write read size > 128k ...passed
00:27:05.449 Test: blockdev write read invalid size ...passed
00:27:05.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.449 Test: blockdev write read max offset ...passed
00:27:05.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.449 Test: blockdev writev readv 8 blocks ...passed
00:27:05.449 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.449 Test: blockdev writev readv block ...passed
00:27:05.449 Test: blockdev writev readv size > 128k ...passed
00:27:05.449 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.449 Test: blockdev comparev and writev ...passed
00:27:05.449 Test: blockdev nvme passthru rw ...passed
00:27:05.449 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.449 Test: blockdev nvme admin passthru ...passed
00:27:05.449 Test: blockdev copy ...passed
00:27:05.449 Suite: bdevio tests on: nvme0n3
00:27:05.449 Test: blockdev write read block ...passed
00:27:05.449 Test: blockdev write zeroes read block ...passed
00:27:05.449 Test: blockdev write zeroes read no split ...passed
00:27:05.449 Test: blockdev write zeroes read split ...passed
00:27:05.449 Test: blockdev write zeroes read split partial ...passed
00:27:05.449 Test: blockdev reset ...passed
00:27:05.449 Test: blockdev write read 8 blocks ...passed
00:27:05.449 Test: blockdev write read size > 128k ...passed
00:27:05.449 Test: blockdev write read invalid size ...passed
00:27:05.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.449 Test: blockdev write read max offset ...passed
00:27:05.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.449 Test: blockdev writev readv 8 blocks ...passed
00:27:05.449 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.449 Test: blockdev writev readv block ...passed
00:27:05.449 Test: blockdev writev readv size > 128k ...passed
00:27:05.449 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.449 Test: blockdev comparev and writev ...passed
00:27:05.449 Test: blockdev nvme passthru rw ...passed
00:27:05.449 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.449 Test: blockdev nvme admin passthru ...passed
00:27:05.449 Test: blockdev copy ...passed
00:27:05.449 Suite: bdevio tests on: nvme0n2
00:27:05.449 Test: blockdev write read block ...passed
00:27:05.449 Test: blockdev write zeroes read block ...passed
00:27:05.449 Test: blockdev write zeroes read no split ...passed
00:27:05.449 Test: blockdev write zeroes read split ...passed
00:27:05.449 Test: blockdev write zeroes read split partial ...passed
00:27:05.449 Test: blockdev reset ...passed
00:27:05.449 Test: blockdev write read 8 blocks ...passed
00:27:05.449 Test: blockdev write read size > 128k ...passed
00:27:05.449 Test: blockdev write read invalid size ...passed
00:27:05.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.449 Test: blockdev write read max offset ...passed
00:27:05.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.449 Test: blockdev writev readv 8 blocks ...passed
00:27:05.449 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.711 Test: blockdev writev readv block ...passed
00:27:05.711 Test: blockdev writev readv size > 128k ...passed
00:27:05.711 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.711 Test: blockdev comparev and writev ...passed
00:27:05.711 Test: blockdev nvme passthru rw ...passed
00:27:05.711 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.711 Test: blockdev nvme admin passthru ...passed
00:27:05.711 Test: blockdev copy ...passed
00:27:05.711 Suite: bdevio tests on: nvme0n1
00:27:05.711 Test: blockdev write read block ...passed
00:27:05.711 Test: blockdev write zeroes read block ...passed
00:27:05.711 Test: blockdev write zeroes read no split ...passed
00:27:05.973 Test: blockdev write zeroes read split ...passed
00:27:05.973 Test: blockdev write zeroes read split partial ...passed
00:27:05.973 Test: blockdev reset ...passed
00:27:05.973 Test: blockdev write read 8 blocks ...passed
00:27:05.973 Test: blockdev write read size > 128k ...passed
00:27:05.973 Test: blockdev write read invalid size ...passed
00:27:05.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:05.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:05.973 Test: blockdev write read max offset ...passed
00:27:05.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:05.973 Test: blockdev writev readv 8 blocks ...passed
00:27:05.973 Test: blockdev writev readv 30 x 1block ...passed
00:27:05.973 Test: blockdev writev readv block ...passed
00:27:05.973 Test: blockdev writev readv size > 128k ...passed
00:27:05.973 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:05.973 Test: blockdev comparev and writev ...passed
00:27:05.973 Test: blockdev nvme passthru rw ...passed
00:27:05.973 Test: blockdev nvme passthru vendor specific ...passed
00:27:05.973 Test: blockdev nvme admin passthru ...passed
00:27:05.973 Test: blockdev copy ...passed
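
Each CUnit suite above is emitted once per registered bdev by the bdevio app, which sits waiting until the companion script fires the run over its RPC socket. A rough sketch of that two-step pattern as it appears in this trace (reading -w as wait-for-RPC mode is my assumption; it is not stated in the log):

    # Start bdevio waiting for an RPC trigger, then run the suites and clean up.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"
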
00:27:05.973
00:27:05.973 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:27:05.973               suites      6      6    n/a      0        0
00:27:05.973                tests    138    138    138      0        0
00:27:05.973              asserts    780    780    780      0      n/a
00:27:05.973
00:27:05.973 Elapsed time =    1.694 seconds
00:27:05.973 0
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72362
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72362 ']'
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72362
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72362
00:27:05.973 killing process with pid 72362
04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72362'
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72362
00:27:05.973 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72362
00:27:06.916 04:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:27:06.916
00:27:06.916 real 0m2.495s
00:27:06.916 user 0m6.010s
00:27:06.916 sys 0m0.314s
00:27:06.916 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:06.916 ************************************
00:27:06.916 END TEST bdev_bounds
************************************
00:27:06.916 04:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:27:06.916 04:49:13 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:27:06.916 04:49:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:06.916 04:49:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:06.916 04:49:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:27:06.916 ************************************
00:27:06.916 START TEST bdev_nbd
00:27:06.916 ************************************
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
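
The bdev_nbd test being set up here exports each bdev as a kernel /dev/nbdX device over the spdk-nbd RPC socket and verifies it with a single direct-I/O read, which is the start_disk/dd/stop_disk cycle traced below. A condensed sketch of one such cycle (the /tmp output path is a stand-in for illustration; the harness writes to test/bdev/nbdtest inside the repo):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    "$RPC" -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0            # map the bdev to /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # read one 4 KiB block
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
    "$RPC" -s "$SOCK" nbd_get_disks                               # expect an empty list again
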
00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72422 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72422 /var/tmp/spdk-nbd.sock 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72422 ']' 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.916 04:49:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:06.917 [2024-11-27 04:49:13.921688] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:27:06.917 [2024-11-27 04:49:13.921800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.917 [2024-11-27 04:49:14.089207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.178 [2024-11-27 04:49:14.189955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:07.751 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:08.013 04:49:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:08.013 
1+0 records in 00:27:08.013 1+0 records out 00:27:08.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091102 s, 4.5 MB/s 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:08.013 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:08.274 1+0 records in 00:27:08.274 1+0 records out 00:27:08.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135193 s, 3.0 MB/s 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:08.274 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:08.535 04:49:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:08.535 1+0 records in 00:27:08.535 1+0 records out 00:27:08.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978213 s, 4.2 MB/s 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:08.535 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:08.797 1+0 records in 00:27:08.797 1+0 records out 00:27:08.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107093 s, 3.8 MB/s 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:08.797 04:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:09.059 1+0 records in 00:27:09.059 1+0 records out 00:27:09.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00162982 s, 2.5 MB/s 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:27:09.059 04:49:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:09.059 1+0 records in 00:27:09.059 1+0 records out 00:27:09.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128523 s, 3.2 MB/s 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:09.059 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd0", 00:27:09.321 "bdev_name": "nvme0n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd1", 00:27:09.321 "bdev_name": "nvme0n2" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd2", 00:27:09.321 "bdev_name": "nvme0n3" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd3", 00:27:09.321 "bdev_name": "nvme1n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd4", 00:27:09.321 "bdev_name": "nvme2n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd5", 00:27:09.321 "bdev_name": "nvme3n1" 00:27:09.321 } 00:27:09.321 ]' 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd0", 00:27:09.321 "bdev_name": "nvme0n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd1", 00:27:09.321 "bdev_name": "nvme0n2" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd2", 00:27:09.321 "bdev_name": "nvme0n3" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd3", 00:27:09.321 "bdev_name": "nvme1n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd4", 00:27:09.321 "bdev_name": "nvme2n1" 00:27:09.321 }, 00:27:09.321 { 00:27:09.321 "nbd_device": "/dev/nbd5", 00:27:09.321 "bdev_name": "nvme3n1" 00:27:09.321 } 00:27:09.321 ]' 00:27:09.321 04:49:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:09.321 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:09.582 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:09.583 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:09.843 04:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.105 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.366 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:27:10.367 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.629 04:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:10.891 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:27:11.155 /dev/nbd0 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.155 1+0 records in 00:27:11.155 1+0 records out 00:27:11.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00092071 s, 4.4 MB/s 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:11.155 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:27:11.438 /dev/nbd1 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.438 1+0 records in 00:27:11.438 1+0 records out 00:27:11.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880458 s, 4.7 MB/s 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:11.438 04:49:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:11.438 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:27:11.700 /dev/nbd10 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.700 1+0 records in 00:27:11.700 1+0 records out 00:27:11.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012971 s, 3.2 MB/s 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:11.700 04:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:27:11.963 /dev/nbd11 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.963 1+0 records in 00:27:11.963 1+0 records out 00:27:11.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137492 s, 3.0 MB/s 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:11.963 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:27:12.223 /dev/nbd12 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:12.223 1+0 records in 00:27:12.223 1+0 records out 00:27:12.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118539 s, 3.5 MB/s 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:12.223 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:27:12.485 /dev/nbd13 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:12.485 1+0 records in 00:27:12.485 1+0 records out 00:27:12.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0016057 s, 2.6 MB/s 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:12.485 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd0", 00:27:12.748 "bdev_name": "nvme0n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd1", 00:27:12.748 "bdev_name": "nvme0n2" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd10", 00:27:12.748 "bdev_name": "nvme0n3" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd11", 00:27:12.748 "bdev_name": "nvme1n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd12", 00:27:12.748 "bdev_name": "nvme2n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd13", 00:27:12.748 "bdev_name": "nvme3n1" 00:27:12.748 } 00:27:12.748 ]' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd0", 00:27:12.748 "bdev_name": "nvme0n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd1", 00:27:12.748 "bdev_name": "nvme0n2" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd10", 00:27:12.748 "bdev_name": "nvme0n3" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd11", 00:27:12.748 "bdev_name": "nvme1n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd12", 00:27:12.748 "bdev_name": "nvme2n1" 00:27:12.748 }, 00:27:12.748 { 00:27:12.748 "nbd_device": "/dev/nbd13", 00:27:12.748 "bdev_name": "nvme3n1" 00:27:12.748 } 00:27:12.748 ]' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:12.748 /dev/nbd1 00:27:12.748 /dev/nbd10 00:27:12.748 /dev/nbd11 00:27:12.748 /dev/nbd12 00:27:12.748 /dev/nbd13' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:12.748 /dev/nbd1 00:27:12.748 /dev/nbd10 00:27:12.748 /dev/nbd11 00:27:12.748 /dev/nbd12 00:27:12.748 /dev/nbd13' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:12.748 256+0 records in 00:27:12.748 256+0 records out 00:27:12.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104864 s, 100 MB/s 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:12.748 04:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:13.010 256+0 records in 00:27:13.010 256+0 records out 00:27:13.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250943 s, 4.2 MB/s 00:27:13.010 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:13.010 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:13.270 256+0 records in 00:27:13.270 256+0 records out 00:27:13.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249167 s, 4.2 MB/s 00:27:13.270 04:49:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:13.270 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:13.530 256+0 records in 00:27:13.531 256+0 records out 00:27:13.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210547 s, 5.0 MB/s 00:27:13.531 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:13.531 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:13.791 256+0 records in 00:27:13.791 256+0 records out 00:27:13.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259844 s, 4.0 MB/s 00:27:13.792 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:13.792 04:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:27:14.054 256+0 records in 00:27:14.054 256+0 records out 00:27:14.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257387 s, 4.1 MB/s 00:27:14.054 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:14.054 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:27:14.316 256+0 records in 00:27:14.316 256+0 records out 00:27:14.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.290463 s, 3.6 MB/s 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:14.316 04:49:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.316 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.577 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.838 04:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
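The trace above is SPDK's nbd_common.sh data-verify pass: each xnvme bdev is exported as a kernel NBD device over the RPC socket (nbd_start_disk), waitfornbd polls /proc/partitions and issues a direct 4 KiB read until the node is usable, and 1 MiB of random data is then written to every /dev/nbdX and compared back with cmp before the exports are torn down. A standalone sketch of that round-trip for a single bdev follows; it assumes a running SPDK target on /var/tmp/spdk-nbd.sock with a bdev named nvme0n1, and the /tmp scratch paths are illustrative, not part of the test suite.

    #!/usr/bin/env bash
    # Sketch: export one bdev over NBD, wait for readiness, write/verify 1 MiB.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Export the bdev as /dev/nbd0 via the SPDK RPC server.
    "$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

    # Poll until the kernel lists nbd0, then prove it with a direct read,
    # mirroring waitfornbd in the trace.
    for _ in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Write 1 MiB of random data through NBD and compare it back.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # Tear the export down again.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

The direct-I/O flags (iflag=direct / oflag=direct) keep the page cache out of the loop, so the comparison exercises the NBD data path rather than cached pages.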
00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.099 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.360 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.621 04:49:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.621 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:27:15.883 04:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:16.145 malloc_lvol_verify 00:27:16.145 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:16.406 7461c105-e14d-4d20-b1ae-88add38cddbe 00:27:16.406 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:16.406 bd792793-0af1-4760-9cb6-b5d083e9fc61 00:27:16.406 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:16.669 /dev/nbd0 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:27:16.669 mke2fs 1.47.0 (5-Feb-2023) 00:27:16.669 Discarding device blocks: 0/4096 done 
00:27:16.669 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:16.669 00:27:16.669 Allocating group tables: 0/1 done 00:27:16.669 Writing inode tables: 0/1 done 00:27:16.669 Creating journal (1024 blocks): done 00:27:16.669 Writing superblocks and filesystem accounting information: 0/1 done 00:27:16.669 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:16.669 04:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72422 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72422 ']' 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72422 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72422 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:16.930 killing process with pid 72422 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72422' 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72422 00:27:16.930 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72422 00:27:17.876 04:49:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:27:17.876 00:27:17.876 real 0m10.987s 00:27:17.876 user 0m14.715s 00:27:17.876 sys 0m3.573s 00:27:17.876 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.876 04:49:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:17.876 ************************************ 00:27:17.876 END TEST bdev_nbd 00:27:17.876 ************************************ 00:27:17.876 
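Before the NBD server is killed, nbd_with_lvol_verify also checks that a logical volume exported over NBD works end to end: a malloc bdev is created, an lvstore and a small lvol are layered on top, the lvol is exported as /dev/nbd0, and mkfs.ext4 must succeed once /sys/block/nbd0/size reports a non-zero capacity. The same sequence, sketched as plain RPC calls, is below; names and sizes are copied from the trace and a running SPDK target on /var/tmp/spdk-nbd.sock is assumed.

    # Sketch: layer an lvol on a malloc bdev and format it through NBD.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB, 512-byte blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol in that store
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # Usable once the kernel reports a non-zero capacity for the export.
    [[ -e /sys/block/nbd0/size && $(< /sys/block/nbd0/size) -gt 0 ]]
    mkfs.ext4 /dev/nbd0

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0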
04:49:24 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:27:17.876 04:49:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:27:17.876 04:49:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:27:17.876 04:49:24 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:27:17.876 04:49:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.876 04:49:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.876 04:49:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:17.876 ************************************ 00:27:17.876 START TEST bdev_fio 00:27:17.876 ************************************ 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:27:17.876 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:27:17.876 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:17.877 ************************************ 00:27:17.877 START TEST bdev_fio_rw_verify 00:27:17.877 ************************************ 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:27:17.877 04:49:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:17.877 04:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:17.877 04:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:17.877 04:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:27:17.877 04:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:17.877 04:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:18.137 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:18.137 fio-3.35 00:27:18.137 Starting 6 threads 00:27:30.378 00:27:30.378 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72837: Wed Nov 27 04:49:35 2024 00:27:30.378 read: IOPS=14.4k, BW=56.1MiB/s (58.8MB/s)(561MiB/10003msec) 00:27:30.378 slat (usec): min=2, max=3059, avg= 6.13, stdev=13.20 00:27:30.378 clat (usec): min=80, max=9795, avg=1308.75, stdev=740.19 00:27:30.378 lat (usec): min=84, max=9806, avg=1314.88, stdev=740.59 
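The fio stage generates a job file with one [job_...] section per bdev, where each filename= names an SPDK bdev rather than a file, and runs fio with the external spdk_bdev ioengine, preloading libasan ahead of the plugin on sanitized builds so ASAN interposes correctly. A minimal sketch follows; only serialize_overlap=1, the per-job sections, and the command line are visible in the trace, so the remaining [global] options here are illustrative assumptions.

    # Sketch: a minimal fio job file for the spdk_bdev ioengine.
    cat > /tmp/bdev.fio <<'EOF'
    [global]
    thread=1
    verify=crc32c
    serialize_overlap=1

    [job_nvme0n1]
    filename=nvme0n1
    EOF

    # Preload ASAN ahead of the fio plugin, as the wrapper in the trace
    # does, then run against the bdev JSON config.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /tmp/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json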
00:27:30.378 clat percentiles (usec): 00:27:30.378 | 50.000th=[ 1205], 99.000th=[ 3589], 99.900th=[ 4686], 99.990th=[ 5866], 00:27:30.378 | 99.999th=[ 9765] 00:27:30.378 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(576MiB/10003msec); 0 zone resets 00:27:30.378 slat (usec): min=13, max=6521, avg=42.53, stdev=144.11 00:27:30.378 clat (usec): min=83, max=22900, avg=1644.17, stdev=868.67 00:27:30.378 lat (usec): min=97, max=22928, avg=1686.70, stdev=880.32 00:27:30.378 clat percentiles (usec): 00:27:30.378 | 50.000th=[ 1516], 99.000th=[ 4228], 99.900th=[ 5997], 99.990th=[19268], 00:27:30.378 | 99.999th=[22938] 00:27:30.378 bw ( KiB/s): min=48868, max=75016, per=100.00%, avg=59245.84, stdev=1326.71, samples=114 00:27:30.378 iops : min=12215, max=18754, avg=14810.89, stdev=331.69, samples=114 00:27:30.378 lat (usec) : 100=0.01%, 250=2.18%, 500=6.22%, 750=8.86%, 1000=12.10% 00:27:30.378 lat (msec) : 2=49.12%, 4=20.57%, 10=0.94%, 20=0.01%, 50=0.01% 00:27:30.378 cpu : usr=43.36%, sys=29.05%, ctx=5544, majf=0, minf=14686 00:27:30.378 IO depths : 1=11.3%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.378 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.378 issued rwts: total=143648,147543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.378 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:30.378 00:27:30.378 Run status group 0 (all jobs): 00:27:30.378 READ: bw=56.1MiB/s (58.8MB/s), 56.1MiB/s-56.1MiB/s (58.8MB/s-58.8MB/s), io=561MiB (588MB), run=10003-10003msec 00:27:30.378 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10003-10003msec 00:27:30.378 ----------------------------------------------------- 00:27:30.378 Suppressions used: 00:27:30.378 count bytes template 00:27:30.378 6 48 /usr/src/fio/parse.c 00:27:30.378 3803 365088 /usr/src/fio/iolog.c 00:27:30.378 1 8 libtcmalloc_minimal.so 00:27:30.378 1 904 libcrypto.so 00:27:30.378 ----------------------------------------------------- 00:27:30.378 00:27:30.378 00:27:30.378 real 0m11.985s 00:27:30.378 user 0m27.543s 00:27:30.378 sys 0m17.732s 00:27:30.378 04:49:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.378 ************************************ 00:27:30.378 04:49:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:30.378 END TEST bdev_fio_rw_verify 00:27:30.378 ************************************ 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:27:30.378 04:49:37 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:27:30.378 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "300989b9-e37b-4675-b2e2-47bb2662b777"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "300989b9-e37b-4675-b2e2-47bb2662b777",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c238ad38-0a94-4e58-8ec4-8d8bf6071aed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c238ad38-0a94-4e58-8ec4-8d8bf6071aed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5c5e4dc1-c6c3-472c-bb7b-866bfe9550ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5c5e4dc1-c6c3-472c-bb7b-866bfe9550ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": 
false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3bcdbf3c-a25f-4af1-8e11-3d50f85520d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3bcdbf3c-a25f-4af1-8e11-3d50f85520d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f131b651-952b-44c0-884b-1bfae0e2c157"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f131b651-952b-44c0-884b-1bfae0e2c157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "649176f1-9d2f-4f99-aa26-bb32c57cfa45"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "649176f1-9d2f-4f99-aa26-bb32c57cfa45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:30.379 /home/vagrant/spdk_repo/spdk 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:27:30.379 00:27:30.379 real 0m12.168s 00:27:30.379 
user 0m27.613s 00:27:30.379 sys 0m17.811s 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.379 ************************************ 00:27:30.379 END TEST bdev_fio 00:27:30.379 ************************************ 00:27:30.379 04:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:30.379 04:49:37 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:30.379 04:49:37 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:30.379 04:49:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:27:30.379 04:49:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.379 04:49:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:30.379 ************************************ 00:27:30.379 START TEST bdev_verify 00:27:30.379 ************************************ 00:27:30.379 04:49:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:30.379 [2024-11-27 04:49:37.214222] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:27:30.379 [2024-11-27 04:49:37.214364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73009 ] 00:27:30.379 [2024-11-27 04:49:37.380548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:30.379 [2024-11-27 04:49:37.518800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.379 [2024-11-27 04:49:37.518913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.964 Running I/O for 5 seconds... 
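bdev_verify drives the same bdevs with SPDK's bdevperf example app in its verify workload, in which every written block is read back and checked. The invocation below is reconstructed from the command line in the trace; the flag annotations are the standard bdevperf meanings, and -C is carried over verbatim.

    # The bdevperf command behind run_test bdev_verify:
    #   -q 128    queue depth per target
    #   -o 4096   I/O size in bytes
    #   -w verify write a pattern, read it back, and compare
    #   -t 5      run time in seconds
    #   -m 0x3    core mask (the two reactors started above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3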
00:27:33.290 23840.00 IOPS, 93.12 MiB/s [2024-11-27T04:49:41.437Z] 23552.00 IOPS, 92.00 MiB/s [2024-11-27T04:49:42.378Z] 23205.67 IOPS, 90.65 MiB/s [2024-11-27T04:49:43.323Z] 22936.00 IOPS, 89.59 MiB/s [2024-11-27T04:49:43.323Z] 23180.80 IOPS, 90.55 MiB/s 00:27:36.120 Latency(us) 00:27:36.120 [2024-11-27T04:49:43.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.120 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0x80000 00:27:36.120 nvme0n1 : 5.09 1809.74 7.07 0.00 0.00 70602.92 8670.92 67754.14 00:27:36.120 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x80000 length 0x80000 00:27:36.120 nvme0n1 : 5.07 1817.78 7.10 0.00 0.00 70278.42 11241.94 81062.99 00:27:36.120 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0x80000 00:27:36.120 nvme0n2 : 5.07 1816.59 7.10 0.00 0.00 70195.47 5948.65 64931.05 00:27:36.120 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x80000 length 0x80000 00:27:36.120 nvme0n2 : 5.07 1791.88 7.00 0.00 0.00 71162.33 12552.66 72997.02 00:27:36.120 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0x80000 00:27:36.120 nvme0n3 : 5.10 1808.80 7.07 0.00 0.00 70379.51 9275.86 65737.65 00:27:36.120 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x80000 length 0x80000 00:27:36.120 nvme0n3 : 5.07 1791.21 7.00 0.00 0.00 71041.21 10384.94 72997.02 00:27:36.120 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0x20000 00:27:36.120 nvme1n1 : 5.10 1806.59 7.06 0.00 0.00 70302.69 6604.01 65737.65 00:27:36.120 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x20000 length 0x20000 00:27:36.120 nvme1n1 : 5.09 1785.28 6.97 0.00 0.00 70995.63 11897.30 70173.93 00:27:36.120 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0xa0000 00:27:36.120 nvme2n1 : 5.08 1813.54 7.08 0.00 0.00 69844.24 6351.95 66544.25 00:27:36.120 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0xa0000 length 0xa0000 00:27:36.120 nvme2n1 : 5.08 1787.78 6.98 0.00 0.00 70764.21 14720.39 64527.75 00:27:36.120 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0x0 length 0xbd0bd 00:27:36.120 nvme3n1 : 5.10 2495.01 9.75 0.00 0.00 50581.68 6755.25 59688.17 00:27:36.120 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.120 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:27:36.120 nvme3n1 : 5.09 2386.84 9.32 0.00 0.00 52769.54 7158.55 68157.44 00:27:36.120 [2024-11-27T04:49:43.323Z] =================================================================================================================== 00:27:36.120 [2024-11-27T04:49:43.323Z] Total : 22911.04 89.50 0.00 0.00 66516.00 5948.65 81062.99 00:27:37.061 ************************************ 00:27:37.061 END TEST bdev_verify 00:27:37.061 ************************************ 00:27:37.061 00:27:37.061 real 
0m6.808s 00:27:37.061 user 0m10.854s 00:27:37.061 sys 0m1.581s 00:27:37.061 04:49:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.061 04:49:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:37.061 04:49:44 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:37.061 04:49:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:27:37.061 04:49:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.061 04:49:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:37.061 ************************************ 00:27:37.061 START TEST bdev_verify_big_io 00:27:37.061 ************************************ 00:27:37.061 04:49:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:37.061 [2024-11-27 04:49:44.094886] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:27:37.061 [2024-11-27 04:49:44.095044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73102 ] 00:27:37.061 [2024-11-27 04:49:44.261299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:37.322 [2024-11-27 04:49:44.395665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.322 [2024-11-27 04:49:44.395766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.893 Running I/O for 5 seconds... 
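The big-I/O pass below is the same harness with the I/O size raised from 4 KiB to 64 KiB (-o 65536); everything else is unchanged. A standalone re-run outside the test wrapper would be (paths and flags copied verbatim from the log):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3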
00:27:43.503 1890.00 IOPS, 118.12 MiB/s [2024-11-27T04:49:50.967Z] 2457.00 IOPS, 153.56 MiB/s [2024-11-27T04:49:51.533Z] 2654.00 IOPS, 165.88 MiB/s 00:27:44.330 Latency(us) 00:27:44.330 [2024-11-27T04:49:51.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.330 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.330 Verification LBA range: start 0x0 length 0x8000 00:27:44.331 nvme0n1 : 5.55 92.30 5.77 0.00 0.00 1350374.01 120989.54 1477685.56 00:27:44.331 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x8000 length 0x8000 00:27:44.331 nvme0n1 : 5.69 109.60 6.85 0.00 0.00 1122210.93 96791.63 1961643.72 00:27:44.331 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x0 length 0x8000 00:27:44.331 nvme0n2 : 5.84 101.40 6.34 0.00 0.00 1166665.13 48194.17 1568024.42 00:27:44.331 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x8000 length 0x8000 00:27:44.331 nvme0n2 : 5.71 148.40 9.27 0.00 0.00 828507.25 18047.61 1542213.32 00:27:44.331 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x0 length 0x8000 00:27:44.331 nvme0n3 : 5.84 109.54 6.85 0.00 0.00 1022434.72 5671.38 1529307.77 00:27:44.331 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x8000 length 0x8000 00:27:44.331 nvme0n3 : 5.72 131.54 8.22 0.00 0.00 909648.13 18955.03 1961643.72 00:27:44.331 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x0 length 0x2000 00:27:44.331 nvme1n1 : 5.89 143.90 8.99 0.00 0.00 750802.57 2659.25 1019538.51 00:27:44.331 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x2000 length 0x2000 00:27:44.331 nvme1n1 : 5.71 123.38 7.71 0.00 0.00 935494.32 95178.44 1780966.01 00:27:44.331 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x0 length 0xa000 00:27:44.331 nvme2n1 : 6.13 83.10 5.19 0.00 0.00 1256854.30 30852.33 3690987.52 00:27:44.331 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0xa000 length 0xa000 00:27:44.331 nvme2n1 : 5.93 129.42 8.09 0.00 0.00 849900.62 627.00 961463.53 00:27:44.331 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0x0 length 0xbd0b 00:27:44.331 nvme3n1 : 6.28 260.36 16.27 0.00 0.00 384847.92 2508.01 2361715.79 00:27:44.331 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.331 Verification LBA range: start 0xbd0b length 0xbd0b 00:27:44.331 nvme3n1 : 5.80 194.79 12.17 0.00 0.00 564909.32 2747.47 851766.35 00:27:44.331 [2024-11-27T04:49:51.534Z] =================================================================================================================== 00:27:44.331 [2024-11-27T04:49:51.534Z] Total : 1627.71 101.73 0.00 0.00 834389.20 627.00 3690987.52 00:27:45.267 00:27:45.267 real 0m8.113s 00:27:45.267 user 0m14.876s 00:27:45.267 sys 0m0.467s 00:27:45.267 04:49:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.267 
************************************ 00:27:45.267 END TEST bdev_verify_big_io 00:27:45.267 ************************************ 00:27:45.267 04:49:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:45.267 04:49:52 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:45.267 04:49:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:45.267 04:49:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.267 04:49:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:45.267 ************************************ 00:27:45.267 START TEST bdev_write_zeroes 00:27:45.267 ************************************ 00:27:45.267 04:49:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:45.267 [2024-11-27 04:49:52.241707] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:27:45.267 [2024-11-27 04:49:52.241825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73212 ] 00:27:45.267 [2024-11-27 04:49:52.400161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.525 [2024-11-27 04:49:52.498790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.783 Running I/O for 1 seconds... 
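bdev_write_zeroes swaps the workload for -w write_zeroes on a single core (the EAL line above shows -c 0x1) and runs for only one second: the point is to exercise the zero-fill path on every bdev, not to benchmark it. Standalone equivalent (copied from the log):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1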
00:27:46.722 78624.00 IOPS, 307.12 MiB/s 00:27:46.722 Latency(us) 00:27:46.722 [2024-11-27T04:49:53.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.722 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.722 nvme0n1 : 1.02 11334.15 44.27 0.00 0.00 11282.83 3856.54 21475.64 00:27:46.722 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.722 nvme0n2 : 1.02 11321.23 44.22 0.00 0.00 11287.69 3932.16 23290.49 00:27:46.722 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.722 nvme0n3 : 1.02 11307.58 44.17 0.00 0.00 11293.08 4032.98 23189.66 00:27:46.722 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.722 nvme1n1 : 1.02 11294.89 44.12 0.00 0.00 11297.78 6856.07 23088.84 00:27:46.722 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.722 nvme2n1 : 1.02 11279.93 44.06 0.00 0.00 11304.56 6856.07 22887.19 00:27:46.723 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.723 nvme3n1 : 1.02 21312.53 83.25 0.00 0.00 5975.96 2155.13 17745.13 00:27:46.723 [2024-11-27T04:49:53.926Z] =================================================================================================================== 00:27:46.723 [2024-11-27T04:49:53.926Z] Total : 77850.32 304.10 0.00 0.00 9833.68 2155.13 23290.49 00:27:47.666 ************************************ 00:27:47.666 END TEST bdev_write_zeroes 00:27:47.666 ************************************ 00:27:47.666 00:27:47.666 real 0m2.472s 00:27:47.666 user 0m1.694s 00:27:47.666 sys 0m0.576s 00:27:47.666 04:49:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.666 04:49:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:47.666 04:49:54 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.666 04:49:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:47.666 04:49:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.666 04:49:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:47.666 ************************************ 00:27:47.666 START TEST bdev_json_nonenclosed 00:27:47.666 ************************************ 00:27:47.666 04:49:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.666 [2024-11-27 04:49:54.782412] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:27:47.666 [2024-11-27 04:49:54.782531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 00:27:47.938 [2024-11-27 04:49:54.944389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.938 [2024-11-27 04:49:55.042248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.938 [2024-11-27 04:49:55.042325] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:47.938 [2024-11-27 04:49:55.042342] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:47.938 [2024-11-27 04:49:55.042350] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:48.221 00:27:48.221 real 0m0.497s 00:27:48.221 user 0m0.312s 00:27:48.221 sys 0m0.080s 00:27:48.221 04:49:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.221 ************************************ 00:27:48.221 END TEST bdev_json_nonenclosed 00:27:48.221 ************************************ 00:27:48.221 04:49:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:48.221 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:48.221 04:49:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:48.221 04:49:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.221 04:49:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:48.221 ************************************ 00:27:48.221 START TEST bdev_json_nonarray 00:27:48.221 ************************************ 00:27:48.221 04:49:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:48.221 [2024-11-27 04:49:55.338435] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:27:48.221 [2024-11-27 04:49:55.338571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73289 ] 00:27:48.482 [2024-11-27 04:49:55.503509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.482 [2024-11-27 04:49:55.602607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.482 [2024-11-27 04:49:55.602701] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
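Both JSON tests here are negative cases: bdevperf is handed a deliberately malformed config and must fail through spdk_app_stop with exactly the json_config.c errors shown, rather than crash. The fixture contents are not reproduced in this log, but shapes like the following (hypothetical examples, not the real files) would trigger these two errors:

    # nonenclosed.json -- top-level content not enclosed in {}:
    #     "subsystems": []
    # nonarray.json -- "subsystems" present but not an array:
    #     { "subsystems": 123 }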
00:27:48.482 [2024-11-27 04:49:55.602718] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:48.482 [2024-11-27 04:49:55.602727] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:48.743 00:27:48.743 real 0m0.509s 00:27:48.743 user 0m0.311s 00:27:48.743 sys 0m0.094s 00:27:48.743 04:49:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.743 04:49:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:48.743 ************************************ 00:27:48.743 END TEST bdev_json_nonarray 00:27:48.743 ************************************ 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:27:48.743 04:49:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:49.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:54.602 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:54.602 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:54.602 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:55.989 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:55.989 00:27:55.989 real 0m57.367s 00:27:55.989 user 1m22.651s 00:27:55.989 sys 0m32.529s 00:27:55.989 ************************************ 00:27:55.989 END TEST blockdev_xnvme 00:27:55.989 ************************************ 00:27:55.989 04:50:02 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.989 04:50:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:55.989 04:50:02 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:27:55.989 04:50:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:55.989 04:50:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.989 04:50:02 -- common/autotest_common.sh@10 -- # set +x 00:27:55.989 ************************************ 00:27:55.989 START TEST ublk 00:27:55.989 ************************************ 00:27:55.989 04:50:02 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:27:55.989 * Looking for test storage... 
00:27:55.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.989 04:50:03 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.989 04:50:03 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.989 04:50:03 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.989 04:50:03 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.989 04:50:03 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.989 04:50:03 ublk -- scripts/common.sh@344 -- # case "$op" in 00:27:55.989 04:50:03 ublk -- scripts/common.sh@345 -- # : 1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.989 04:50:03 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:55.989 04:50:03 ublk -- scripts/common.sh@365 -- # decimal 1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@353 -- # local d=1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.989 04:50:03 ublk -- scripts/common.sh@355 -- # echo 1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.989 04:50:03 ublk -- scripts/common.sh@366 -- # decimal 2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@353 -- # local d=2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.989 04:50:03 ublk -- scripts/common.sh@355 -- # echo 2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.989 04:50:03 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.989 04:50:03 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.989 04:50:03 ublk -- scripts/common.sh@368 -- # return 0 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:55.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.989 --rc genhtml_branch_coverage=1 00:27:55.989 --rc genhtml_function_coverage=1 00:27:55.989 --rc genhtml_legend=1 00:27:55.989 --rc geninfo_all_blocks=1 00:27:55.989 --rc geninfo_unexecuted_blocks=1 00:27:55.989 00:27:55.989 ' 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:55.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.989 --rc genhtml_branch_coverage=1 00:27:55.989 --rc genhtml_function_coverage=1 00:27:55.989 --rc genhtml_legend=1 00:27:55.989 --rc geninfo_all_blocks=1 00:27:55.989 --rc geninfo_unexecuted_blocks=1 00:27:55.989 00:27:55.989 ' 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:55.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.989 --rc genhtml_branch_coverage=1 00:27:55.989 --rc 
genhtml_function_coverage=1 00:27:55.989 --rc genhtml_legend=1 00:27:55.989 --rc geninfo_all_blocks=1 00:27:55.989 --rc geninfo_unexecuted_blocks=1 00:27:55.989 00:27:55.989 ' 00:27:55.989 04:50:03 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:55.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.989 --rc genhtml_branch_coverage=1 00:27:55.989 --rc genhtml_function_coverage=1 00:27:55.989 --rc genhtml_legend=1 00:27:55.989 --rc geninfo_all_blocks=1 00:27:55.990 --rc geninfo_unexecuted_blocks=1 00:27:55.990 00:27:55.990 ' 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:27:55.990 04:50:03 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:27:55.990 04:50:03 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:27:55.990 04:50:03 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:27:55.990 04:50:03 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:27:55.990 04:50:03 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:27:55.990 04:50:03 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:27:55.990 04:50:03 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:27:55.990 04:50:03 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:27:55.990 04:50:03 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:27:55.990 04:50:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:55.990 04:50:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.990 04:50:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:55.990 ************************************ 00:27:55.990 START TEST test_save_ublk_config 00:27:55.990 ************************************ 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73600 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73600 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73600 ']' 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
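The ublk suite depends on the ublk_drv kernel module, loaded by the modprobe at ublk.sh@133 above. A manual prerequisite check could look like this (/dev/ublk-control is the driver's usual control node; its presence is assumed here, not shown in the log):

    modprobe ublk_drv
    test -c /dev/ublk-control && echo 'ublk control device present'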
00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.990 04:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:56.250 [2024-11-27 04:50:03.252416] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:27:56.250 [2024-11-27 04:50:03.252535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73600 ] 00:27:56.250 [2024-11-27 04:50:03.413456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.511 [2024-11-27 04:50:03.511823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:57.079 [2024-11-27 04:50:04.133087] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:57.079 [2024-11-27 04:50:04.134002] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:57.079 malloc0 00:27:57.079 [2024-11-27 04:50:04.197200] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:27:57.079 [2024-11-27 04:50:04.197291] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:27:57.079 [2024-11-27 04:50:04.197308] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:27:57.079 [2024-11-27 04:50:04.197318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:27:57.079 [2024-11-27 04:50:04.206152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:57.079 [2024-11-27 04:50:04.206177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:57.079 [2024-11-27 04:50:04.213094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:57.079 [2024-11-27 04:50:04.213216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:27:57.079 [2024-11-27 04:50:04.230091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:27:57.079 0 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.079 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:57.342 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:57.342 04:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:27:57.342 "subsystems": [ 00:27:57.342 { 00:27:57.342 "subsystem": "fsdev", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "fsdev_set_opts", 00:27:57.342 "params": { 00:27:57.342 "fsdev_io_pool_size": 65535, 00:27:57.342 "fsdev_io_cache_size": 256 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "keyring", 00:27:57.342 "config": [] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "iobuf", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "iobuf_set_options", 00:27:57.342 "params": { 00:27:57.342 "small_pool_count": 8192, 00:27:57.342 "large_pool_count": 1024, 00:27:57.342 "small_bufsize": 8192, 00:27:57.342 "large_bufsize": 135168, 00:27:57.342 "enable_numa": false 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "sock", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "sock_set_default_impl", 00:27:57.342 "params": { 00:27:57.342 "impl_name": "posix" 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "sock_impl_set_options", 00:27:57.342 "params": { 00:27:57.342 "impl_name": "ssl", 00:27:57.342 "recv_buf_size": 4096, 00:27:57.342 "send_buf_size": 4096, 00:27:57.342 "enable_recv_pipe": true, 00:27:57.342 "enable_quickack": false, 00:27:57.342 "enable_placement_id": 0, 00:27:57.342 "enable_zerocopy_send_server": true, 00:27:57.342 "enable_zerocopy_send_client": false, 00:27:57.342 "zerocopy_threshold": 0, 00:27:57.342 "tls_version": 0, 00:27:57.342 "enable_ktls": false 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "sock_impl_set_options", 00:27:57.342 "params": { 00:27:57.342 "impl_name": "posix", 00:27:57.342 "recv_buf_size": 2097152, 00:27:57.342 "send_buf_size": 2097152, 00:27:57.342 "enable_recv_pipe": true, 00:27:57.342 "enable_quickack": false, 00:27:57.342 "enable_placement_id": 0, 00:27:57.342 "enable_zerocopy_send_server": true, 00:27:57.342 "enable_zerocopy_send_client": false, 00:27:57.342 "zerocopy_threshold": 0, 00:27:57.342 "tls_version": 0, 00:27:57.342 "enable_ktls": false 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "vmd", 00:27:57.342 "config": [] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "accel", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "accel_set_options", 00:27:57.342 "params": { 00:27:57.342 "small_cache_size": 128, 00:27:57.342 "large_cache_size": 16, 00:27:57.342 "task_count": 2048, 00:27:57.342 "sequence_count": 2048, 00:27:57.342 "buf_count": 2048 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "bdev", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "bdev_set_options", 00:27:57.342 "params": { 00:27:57.342 "bdev_io_pool_size": 65535, 00:27:57.342 "bdev_io_cache_size": 256, 00:27:57.342 "bdev_auto_examine": true, 00:27:57.342 "iobuf_small_cache_size": 128, 00:27:57.342 "iobuf_large_cache_size": 16 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "bdev_raid_set_options", 00:27:57.342 "params": { 00:27:57.342 "process_window_size_kb": 1024, 00:27:57.342 "process_max_bandwidth_mb_sec": 0 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "bdev_iscsi_set_options", 00:27:57.342 "params": { 00:27:57.342 "timeout_sec": 30 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 
"method": "bdev_nvme_set_options", 00:27:57.342 "params": { 00:27:57.342 "action_on_timeout": "none", 00:27:57.342 "timeout_us": 0, 00:27:57.342 "timeout_admin_us": 0, 00:27:57.342 "keep_alive_timeout_ms": 10000, 00:27:57.342 "arbitration_burst": 0, 00:27:57.342 "low_priority_weight": 0, 00:27:57.342 "medium_priority_weight": 0, 00:27:57.342 "high_priority_weight": 0, 00:27:57.342 "nvme_adminq_poll_period_us": 10000, 00:27:57.342 "nvme_ioq_poll_period_us": 0, 00:27:57.342 "io_queue_requests": 0, 00:27:57.342 "delay_cmd_submit": true, 00:27:57.342 "transport_retry_count": 4, 00:27:57.342 "bdev_retry_count": 3, 00:27:57.342 "transport_ack_timeout": 0, 00:27:57.342 "ctrlr_loss_timeout_sec": 0, 00:27:57.342 "reconnect_delay_sec": 0, 00:27:57.342 "fast_io_fail_timeout_sec": 0, 00:27:57.342 "disable_auto_failback": false, 00:27:57.342 "generate_uuids": false, 00:27:57.342 "transport_tos": 0, 00:27:57.342 "nvme_error_stat": false, 00:27:57.342 "rdma_srq_size": 0, 00:27:57.342 "io_path_stat": false, 00:27:57.342 "allow_accel_sequence": false, 00:27:57.342 "rdma_max_cq_size": 0, 00:27:57.342 "rdma_cm_event_timeout_ms": 0, 00:27:57.342 "dhchap_digests": [ 00:27:57.342 "sha256", 00:27:57.342 "sha384", 00:27:57.342 "sha512" 00:27:57.342 ], 00:27:57.342 "dhchap_dhgroups": [ 00:27:57.342 "null", 00:27:57.342 "ffdhe2048", 00:27:57.342 "ffdhe3072", 00:27:57.342 "ffdhe4096", 00:27:57.342 "ffdhe6144", 00:27:57.342 "ffdhe8192" 00:27:57.342 ] 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "bdev_nvme_set_hotplug", 00:27:57.342 "params": { 00:27:57.342 "period_us": 100000, 00:27:57.342 "enable": false 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "bdev_malloc_create", 00:27:57.342 "params": { 00:27:57.342 "name": "malloc0", 00:27:57.342 "num_blocks": 8192, 00:27:57.342 "block_size": 4096, 00:27:57.342 "physical_block_size": 4096, 00:27:57.342 "uuid": "4c522e3e-a1af-48d8-8427-83f3e4477992", 00:27:57.342 "optimal_io_boundary": 0, 00:27:57.342 "md_size": 0, 00:27:57.342 "dif_type": 0, 00:27:57.342 "dif_is_head_of_md": false, 00:27:57.342 "dif_pi_format": 0 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "bdev_wait_for_examine" 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "scsi", 00:27:57.342 "config": null 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "scheduler", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "framework_set_scheduler", 00:27:57.342 "params": { 00:27:57.342 "name": "static" 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "vhost_scsi", 00:27:57.342 "config": [] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "vhost_blk", 00:27:57.342 "config": [] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "ublk", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "ublk_create_target", 00:27:57.342 "params": { 00:27:57.342 "cpumask": "1" 00:27:57.342 } 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "method": "ublk_start_disk", 00:27:57.342 "params": { 00:27:57.342 "bdev_name": "malloc0", 00:27:57.342 "ublk_id": 0, 00:27:57.342 "num_queues": 1, 00:27:57.342 "queue_depth": 128 00:27:57.342 } 00:27:57.342 } 00:27:57.342 ] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "nbd", 00:27:57.342 "config": [] 00:27:57.342 }, 00:27:57.342 { 00:27:57.342 "subsystem": "nvmf", 00:27:57.342 "config": [ 00:27:57.342 { 00:27:57.342 "method": "nvmf_set_config", 00:27:57.342 "params": { 00:27:57.342 
"discovery_filter": "match_any", 00:27:57.342 "admin_cmd_passthru": { 00:27:57.342 "identify_ctrlr": false 00:27:57.342 }, 00:27:57.342 "dhchap_digests": [ 00:27:57.342 "sha256", 00:27:57.342 "sha384", 00:27:57.342 "sha512" 00:27:57.342 ], 00:27:57.343 "dhchap_dhgroups": [ 00:27:57.343 "null", 00:27:57.343 "ffdhe2048", 00:27:57.343 "ffdhe3072", 00:27:57.343 "ffdhe4096", 00:27:57.343 "ffdhe6144", 00:27:57.343 "ffdhe8192" 00:27:57.343 ] 00:27:57.343 } 00:27:57.343 }, 00:27:57.343 { 00:27:57.343 "method": "nvmf_set_max_subsystems", 00:27:57.343 "params": { 00:27:57.343 "max_subsystems": 1024 00:27:57.343 } 00:27:57.343 }, 00:27:57.343 { 00:27:57.343 "method": "nvmf_set_crdt", 00:27:57.343 "params": { 00:27:57.343 "crdt1": 0, 00:27:57.343 "crdt2": 0, 00:27:57.343 "crdt3": 0 00:27:57.343 } 00:27:57.343 } 00:27:57.343 ] 00:27:57.343 }, 00:27:57.343 { 00:27:57.343 "subsystem": "iscsi", 00:27:57.343 "config": [ 00:27:57.343 { 00:27:57.343 "method": "iscsi_set_options", 00:27:57.343 "params": { 00:27:57.343 "node_base": "iqn.2016-06.io.spdk", 00:27:57.343 "max_sessions": 128, 00:27:57.343 "max_connections_per_session": 2, 00:27:57.343 "max_queue_depth": 64, 00:27:57.343 "default_time2wait": 2, 00:27:57.343 "default_time2retain": 20, 00:27:57.343 "first_burst_length": 8192, 00:27:57.343 "immediate_data": true, 00:27:57.343 "allow_duplicated_isid": false, 00:27:57.343 "error_recovery_level": 0, 00:27:57.343 "nop_timeout": 60, 00:27:57.343 "nop_in_interval": 30, 00:27:57.343 "disable_chap": false, 00:27:57.343 "require_chap": false, 00:27:57.343 "mutual_chap": false, 00:27:57.343 "chap_group": 0, 00:27:57.343 "max_large_datain_per_connection": 64, 00:27:57.343 "max_r2t_per_connection": 4, 00:27:57.343 "pdu_pool_size": 36864, 00:27:57.343 "immediate_data_pool_size": 16384, 00:27:57.343 "data_out_pool_size": 2048 00:27:57.343 } 00:27:57.343 } 00:27:57.343 ] 00:27:57.343 } 00:27:57.343 ] 00:27:57.343 }' 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73600 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73600 ']' 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73600 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.343 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73600 00:27:57.604 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.604 killing process with pid 73600 00:27:57.604 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.604 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73600' 00:27:57.604 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73600 00:27:57.604 04:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73600 00:27:58.545 [2024-11-27 04:50:05.590101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:27:58.545 [2024-11-27 04:50:05.635106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:58.545 [2024-11-27 04:50:05.635265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:27:58.545 [2024-11-27 04:50:05.646092] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:58.545 [2024-11-27 04:50:05.646165] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:27:58.545 [2024-11-27 04:50:05.646184] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:27:58.545 [2024-11-27 04:50:05.646212] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:58.545 [2024-11-27 04:50:05.646373] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73662 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73662 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73662 ']' 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:00.446 04:50:07 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:28:00.446 "subsystems": [ 00:28:00.446 { 00:28:00.446 "subsystem": "fsdev", 00:28:00.446 "config": [ 00:28:00.446 { 00:28:00.446 "method": "fsdev_set_opts", 00:28:00.446 "params": { 00:28:00.446 "fsdev_io_pool_size": 65535, 00:28:00.446 "fsdev_io_cache_size": 256 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "keyring", 00:28:00.446 "config": [] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "iobuf", 00:28:00.446 "config": [ 00:28:00.446 { 00:28:00.446 "method": "iobuf_set_options", 00:28:00.446 "params": { 00:28:00.446 "small_pool_count": 8192, 00:28:00.446 "large_pool_count": 1024, 00:28:00.446 "small_bufsize": 8192, 00:28:00.446 "large_bufsize": 135168, 00:28:00.446 "enable_numa": false 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "sock", 00:28:00.446 "config": [ 00:28:00.446 { 00:28:00.446 "method": "sock_set_default_impl", 00:28:00.446 "params": { 00:28:00.446 "impl_name": "posix" 00:28:00.446 } 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "method": "sock_impl_set_options", 00:28:00.446 "params": { 00:28:00.446 "impl_name": "ssl", 00:28:00.446 "recv_buf_size": 4096, 00:28:00.446 "send_buf_size": 4096, 00:28:00.446 "enable_recv_pipe": true, 00:28:00.446 "enable_quickack": false, 00:28:00.446 "enable_placement_id": 0, 00:28:00.446 "enable_zerocopy_send_server": true, 00:28:00.446 "enable_zerocopy_send_client": false, 00:28:00.446 "zerocopy_threshold": 0, 00:28:00.446 "tls_version": 0, 00:28:00.446 "enable_ktls": false 00:28:00.446 } 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "method": "sock_impl_set_options", 00:28:00.446 "params": { 00:28:00.446 "impl_name": "posix", 00:28:00.446 "recv_buf_size": 2097152, 00:28:00.446 "send_buf_size": 2097152, 00:28:00.446 "enable_recv_pipe": true, 00:28:00.446 "enable_quickack": false, 00:28:00.446 "enable_placement_id": 0, 00:28:00.446 "enable_zerocopy_send_server": true, 00:28:00.446 "enable_zerocopy_send_client": false, 
00:28:00.446 "zerocopy_threshold": 0, 00:28:00.446 "tls_version": 0, 00:28:00.446 "enable_ktls": false 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "vmd", 00:28:00.446 "config": [] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "accel", 00:28:00.446 "config": [ 00:28:00.446 { 00:28:00.446 "method": "accel_set_options", 00:28:00.446 "params": { 00:28:00.446 "small_cache_size": 128, 00:28:00.446 "large_cache_size": 16, 00:28:00.446 "task_count": 2048, 00:28:00.446 "sequence_count": 2048, 00:28:00.446 "buf_count": 2048 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ] 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "subsystem": "bdev", 00:28:00.446 "config": [ 00:28:00.446 { 00:28:00.446 "method": "bdev_set_options", 00:28:00.446 "params": { 00:28:00.446 "bdev_io_pool_size": 65535, 00:28:00.446 "bdev_io_cache_size": 256, 00:28:00.446 "bdev_auto_examine": true, 00:28:00.446 "iobuf_small_cache_size": 128, 00:28:00.446 "iobuf_large_cache_size": 16 00:28:00.446 } 00:28:00.446 }, 00:28:00.446 { 00:28:00.446 "method": "bdev_raid_set_options", 00:28:00.446 "params": { 00:28:00.446 "process_window_size_kb": 1024, 00:28:00.446 "process_max_bandwidth_mb_sec": 0 00:28:00.446 } 00:28:00.446 }, 00:28:00.446 { 00:28:00.447 "method": "bdev_iscsi_set_options", 00:28:00.447 "params": { 00:28:00.447 "timeout_sec": 30 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "bdev_nvme_set_options", 00:28:00.447 "params": { 00:28:00.447 "action_on_timeout": "none", 00:28:00.447 "timeout_us": 0, 00:28:00.447 "timeout_admin_us": 0, 00:28:00.447 "keep_alive_timeout_ms": 10000, 00:28:00.447 "arbitration_burst": 0, 00:28:00.447 "low_priority_weight": 0, 00:28:00.447 "medium_priority_weight": 0, 00:28:00.447 "high_priority_weight": 0, 00:28:00.447 "nvme_adminq_poll_period_us": 10000, 00:28:00.447 "nvme_ioq_poll_period_us": 0, 00:28:00.447 "io_queue_requests": 0, 00:28:00.447 "delay_cmd_submit": true, 00:28:00.447 "transport_retry_count": 4, 00:28:00.447 "bdev_retry_count": 3, 00:28:00.447 "transport_ack_timeout": 0, 00:28:00.447 "ctrlr_loss_timeout_sec": 0, 00:28:00.447 "reconnect_delay_sec": 0, 00:28:00.447 "fast_io_fail_timeout_sec": 0, 00:28:00.447 "disable_auto_failback": false, 00:28:00.447 "generate_uuids": false, 00:28:00.447 "transport_tos": 0, 00:28:00.447 "nvme_error_stat": false, 00:28:00.447 "rdma_srq_size": 0, 00:28:00.447 "io_path_stat": false, 00:28:00.447 "allow_accel_sequence": false, 00:28:00.447 "rdma_max_cq_size": 0, 00:28:00.447 "rdma_cm_event_timeout_ms": 0, 00:28:00.447 "dhchap_digests": [ 00:28:00.447 "sha256", 00:28:00.447 "sha384", 00:28:00.447 "sha512" 00:28:00.447 ], 00:28:00.447 "dhchap_dhgroups": [ 00:28:00.447 "null", 00:28:00.447 "ffdhe2048", 00:28:00.447 "ffdhe3072", 00:28:00.447 "ffdhe4096", 00:28:00.447 "ffdhe6144", 00:28:00.447 "ffdhe8192" 00:28:00.447 ] 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "bdev_nvme_set_hotplug", 00:28:00.447 "params": { 00:28:00.447 "period_us": 100000, 00:28:00.447 "enable": false 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "bdev_malloc_create", 00:28:00.447 "params": { 00:28:00.447 "name": "malloc0", 00:28:00.447 "num_blocks": 8192, 00:28:00.447 "block_size": 4096, 00:28:00.447 "physical_block_size": 4096, 00:28:00.447 "uuid": "4c522e3e-a1af-48d8-8427-83f3e4477992", 00:28:00.447 "optimal_io_boundary": 0, 00:28:00.447 "md_size": 0, 00:28:00.447 "dif_type": 0, 00:28:00.447 "dif_is_head_of_md": false, 00:28:00.447 "dif_pi_format": 0 
00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "bdev_wait_for_examine" 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "scsi", 00:28:00.447 "config": null 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "scheduler", 00:28:00.447 "config": [ 00:28:00.447 { 00:28:00.447 "method": "framework_set_scheduler", 00:28:00.447 "params": { 00:28:00.447 "name": "static" 00:28:00.447 } 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "vhost_scsi", 00:28:00.447 "config": [] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "vhost_blk", 00:28:00.447 "config": [] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "ublk", 00:28:00.447 "config": [ 00:28:00.447 { 00:28:00.447 "method": "ublk_create_target", 00:28:00.447 "params": { 00:28:00.447 "cpumask": "1" 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "ublk_start_disk", 00:28:00.447 "params": { 00:28:00.447 "bdev_name": "malloc0", 00:28:00.447 "ublk_id": 0, 00:28:00.447 "num_queues": 1, 00:28:00.447 "queue_depth": 128 00:28:00.447 } 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "nbd", 00:28:00.447 "config": [] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "nvmf", 00:28:00.447 "config": [ 00:28:00.447 { 00:28:00.447 "method": "nvmf_set_config", 00:28:00.447 "params": { 00:28:00.447 "discovery_filter": "match_any", 00:28:00.447 "admin_cmd_passthru": { 00:28:00.447 "identify_ctrlr": false 00:28:00.447 }, 00:28:00.447 "dhchap_digests": [ 00:28:00.447 "sha256", 00:28:00.447 "sha384", 00:28:00.447 "sha512" 00:28:00.447 ], 00:28:00.447 "dhchap_dhgroups": [ 00:28:00.447 "null", 00:28:00.447 "ffdhe2048", 00:28:00.447 "ffdhe3072", 00:28:00.447 "ffdhe4096", 00:28:00.447 "ffdhe6144", 00:28:00.447 "ffdhe8192" 00:28:00.447 ] 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "nvmf_set_max_subsystems", 00:28:00.447 "params": { 00:28:00.447 "max_subsystems": 1024 00:28:00.447 } 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "method": "nvmf_set_crdt", 00:28:00.447 "params": { 00:28:00.447 "crdt1": 0, 00:28:00.447 "crdt2": 0, 00:28:00.447 "crdt3": 0 00:28:00.447 } 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 }, 00:28:00.447 { 00:28:00.447 "subsystem": "iscsi", 00:28:00.447 "config": [ 00:28:00.447 { 00:28:00.447 "method": "iscsi_set_options", 00:28:00.447 "params": { 00:28:00.447 "node_base": "iqn.2016-06.io.spdk", 00:28:00.447 "max_sessions": 128, 00:28:00.447 "max_connections_per_session": 2, 00:28:00.447 "max_queue_depth": 64, 00:28:00.447 "default_time2wait": 2, 00:28:00.447 "default_time2retain": 20, 00:28:00.447 "first_burst_length": 8192, 00:28:00.447 "immediate_data": true, 00:28:00.447 "allow_duplicated_isid": false, 00:28:00.447 "error_recovery_level": 0, 00:28:00.447 "nop_timeout": 60, 00:28:00.447 "nop_in_interval": 30, 00:28:00.447 "disable_chap": false, 00:28:00.447 "require_chap": false, 00:28:00.447 "mutual_chap": false, 00:28:00.447 "chap_group": 0, 00:28:00.447 "max_large_datain_per_connection": 64, 00:28:00.447 "max_r2t_per_connection": 4, 00:28:00.447 "pdu_pool_size": 36864, 00:28:00.447 "immediate_data_pool_size": 16384, 00:28:00.447 "data_out_pool_size": 2048 00:28:00.447 } 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 } 00:28:00.447 ] 00:28:00.447 }' 00:28:00.447 04:50:07 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:28:00.447 [2024-11-27 04:50:07.440940] Starting SPDK 
v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:28:00.447 [2024-11-27 04:50:07.441059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73662 ] 00:28:00.447 [2024-11-27 04:50:07.598828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.705 [2024-11-27 04:50:07.695658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.272 [2024-11-27 04:50:08.469092] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:01.272 [2024-11-27 04:50:08.470000] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:01.530 [2024-11-27 04:50:08.477203] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:01.530 [2024-11-27 04:50:08.477283] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:01.530 [2024-11-27 04:50:08.477299] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:01.530 [2024-11-27 04:50:08.477309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:01.530 [2024-11-27 04:50:08.486148] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:01.530 [2024-11-27 04:50:08.486171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:01.530 [2024-11-27 04:50:08.493094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:01.530 [2024-11-27 04:50:08.493203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:01.530 [2024-11-27 04:50:08.510084] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:01.530 04:50:08 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73662 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73662 ']' 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73662 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73662 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 
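At this point the round trip is proven: the first target (pid 73600) created /dev/ublkb0, save_config dumped the JSON above, and a second target (pid 73662) rebuilt the identical device purely from that JSON via -c /dev/fd/63, confirmed by ublk_get_disks. Condensed by hand, with scripts/rpc.py standing in for the harness's rpc_cmd (a sketch of the flow, not the harness itself):

    ./build/bin/spdk_tgt -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096 -b malloc0
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > saved.json
    kill %1; wait
    ./build/bin/spdk_tgt -L ublk -c saved.json &   # /dev/ublkb0 reappears
    ./scripts/rpc.py ublk_get_disks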
00:28:01.531 killing process with pid 73662 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73662' 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73662 00:28:01.531 04:50:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73662 00:28:02.903 [2024-11-27 04:50:09.765935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:02.903 [2024-11-27 04:50:09.798177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:02.903 [2024-11-27 04:50:09.798306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:02.903 [2024-11-27 04:50:09.805098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:02.903 [2024-11-27 04:50:09.805151] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:02.903 [2024-11-27 04:50:09.805161] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:02.903 [2024-11-27 04:50:09.805189] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:02.903 [2024-11-27 04:50:09.805333] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:04.276 04:50:11 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:28:04.276 00:28:04.276 real 0m8.004s 00:28:04.276 user 0m5.481s 00:28:04.276 sys 0m3.116s 00:28:04.276 04:50:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.276 04:50:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:04.276 ************************************ 00:28:04.276 END TEST test_save_ublk_config 00:28:04.276 ************************************ 00:28:04.276 04:50:11 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73736 00:28:04.276 04:50:11 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:04.276 04:50:11 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73736 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@835 -- # '[' -z 73736 ']' 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.276 04:50:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:04.276 04:50:11 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:04.276 [2024-11-27 04:50:11.290982] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:28:04.276 [2024-11-27 04:50:11.291117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73736 ] 00:28:04.276 [2024-11-27 04:50:11.448326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:04.534 [2024-11-27 04:50:11.537449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.534 [2024-11-27 04:50:11.537462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.100 04:50:12 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.100 04:50:12 ublk -- common/autotest_common.sh@868 -- # return 0 00:28:05.100 04:50:12 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:28:05.100 04:50:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.100 04:50:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.100 04:50:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 ************************************ 00:28:05.100 START TEST test_create_ublk 00:28:05.100 ************************************ 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:28:05.100 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 [2024-11-27 04:50:12.131086] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:05.100 [2024-11-27 04:50:12.132802] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.100 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:28:05.100 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.100 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:05.358 [2024-11-27 04:50:12.313208] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:28:05.358 [2024-11-27 04:50:12.313560] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:28:05.358 [2024-11-27 04:50:12.313575] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:05.358 [2024-11-27 04:50:12.313581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:05.358 [2024-11-27 04:50:12.322329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:05.358 [2024-11-27 04:50:12.322349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:05.358 
[2024-11-27 04:50:12.329094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:05.358 [2024-11-27 04:50:12.329624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:05.358 [2024-11-27 04:50:12.344097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:05.358 04:50:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:28:05.358 { 00:28:05.358 "ublk_device": "/dev/ublkb0", 00:28:05.358 "id": 0, 00:28:05.358 "queue_depth": 512, 00:28:05.358 "num_queues": 4, 00:28:05.358 "bdev_name": "Malloc0" 00:28:05.358 } 00:28:05.358 ]' 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:28:05.358 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:28:05.359 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:28:05.359 04:50:12 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
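
The assembled template expands to the single fio invocation that follows. A minimal standalone equivalent is sketched below, assuming the goal is a full write-then-verify pass; it drops --time_based --runtime=10, which the harness keeps and which — as fio itself warns next — prevents the verification read phase from ever starting:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc
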
00:28:05.359 04:50:12 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:28:05.617 fio: verification read phase will never start because write phase uses all of runtime 00:28:05.617 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:28:05.617 fio-3.35 00:28:05.617 Starting 1 process 00:28:15.628 00:28:15.628 fio_test: (groupid=0, jobs=1): err= 0: pid=73777: Wed Nov 27 04:50:22 2024 00:28:15.628 write: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(538MiB/10001msec); 0 zone resets 00:28:15.628 clat (usec): min=42, max=3951, avg=71.77, stdev=93.88 00:28:15.628 lat (usec): min=42, max=3951, avg=72.24, stdev=93.90 00:28:15.628 clat percentiles (usec): 00:28:15.628 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 63], 00:28:15.628 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:28:15.628 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 81], 00:28:15.628 | 99.00th=[ 92], 99.50th=[ 106], 99.90th=[ 1811], 99.95th=[ 2671], 00:28:15.628 | 99.99th=[ 3621] 00:28:15.628 bw ( KiB/s): min=53464, max=59376, per=100.00%, avg=55163.79, stdev=1401.11, samples=19 00:28:15.628 iops : min=13366, max=14844, avg=13790.95, stdev=350.28, samples=19 00:28:15.628 lat (usec) : 50=0.60%, 100=98.81%, 250=0.36%, 500=0.05%, 750=0.01% 00:28:15.628 lat (usec) : 1000=0.02% 00:28:15.628 lat (msec) : 2=0.06%, 4=0.09% 00:28:15.628 cpu : usr=2.13%, sys=13.74%, ctx=137776, majf=0, minf=795 00:28:15.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.628 issued rwts: total=0,137784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:15.628 00:28:15.628 Run status group 0 (all jobs): 00:28:15.629 WRITE: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=538MiB (564MB), run=10001-10001msec 00:28:15.629 00:28:15.629 Disk stats (read/write): 00:28:15.629 ublkb0: ios=0/136371, merge=0/0, ticks=0/8170, in_queue=8170, util=99.08% 00:28:15.629 04:50:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:15.629 [2024-11-27 04:50:22.746234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:15.629 [2024-11-27 04:50:22.775674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:15.629 [2024-11-27 04:50:22.776574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:15.629 [2024-11-27 04:50:22.783106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:15.629 [2024-11-27 04:50:22.783371] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:15.629 [2024-11-27 04:50:22.783386] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.629 04:50:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:15.629 [2024-11-27 04:50:22.799150] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:28:15.629 request: 00:28:15.629 { 00:28:15.629 "ublk_id": 0, 00:28:15.629 "method": "ublk_stop_disk", 00:28:15.629 "req_id": 1 00:28:15.629 } 00:28:15.629 Got JSON-RPC error response 00:28:15.629 response: 00:28:15.629 { 00:28:15.629 "code": -19, 00:28:15.629 "message": "No such device" 00:28:15.629 } 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:15.629 04:50:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:15.629 [2024-11-27 04:50:22.815156] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:15.629 [2024-11-27 04:50:22.823085] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:15.629 [2024-11-27 04:50:22.823116] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.629 04:50:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.629 04:50:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.197 04:50:23 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:28:16.197 04:50:23 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:28:16.197 04:50:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:28:16.197 00:28:16.197 real 0m11.175s 00:28:16.197 user 0m0.510s 00:28:16.197 sys 0m1.445s 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.197 04:50:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 ************************************ 00:28:16.197 END TEST test_create_ublk 00:28:16.197 ************************************ 00:28:16.197 04:50:23 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:28:16.197 04:50:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.197 04:50:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.197 04:50:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 ************************************ 00:28:16.197 START TEST test_create_multi_ublk 00:28:16.197 ************************************ 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.197 [2024-11-27 04:50:23.347082] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:16.197 [2024-11-27 04:50:23.348803] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.197 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.456 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.456 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:28:16.456 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:28:16.456 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.456 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.456 [2024-11-27 04:50:23.587217] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
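
The multi-ublk test starting here drives the same per-device RPC sequence already exercised in test_create_ublk, once for each of four devices. Condensed to direct rpc.py calls, under the assumption that rpc_cmd is the harness wrapper around scripts/rpc.py, the flow amounts to:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  for i in 0 1 2 3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks
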
00:28:16.456 [2024-11-27 04:50:23.587561] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:28:16.456 [2024-11-27 04:50:23.587574] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:16.456 [2024-11-27 04:50:23.587583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:16.456 [2024-11-27 04:50:23.611091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:16.456 [2024-11-27 04:50:23.611113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:16.456 [2024-11-27 04:50:23.623089] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:16.456 [2024-11-27 04:50:23.623641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:16.714 [2024-11-27 04:50:23.663093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.714 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:16.978 [2024-11-27 04:50:23.922208] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:28:16.978 [2024-11-27 04:50:23.922535] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:28:16.978 [2024-11-27 04:50:23.922548] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:16.978 [2024-11-27 04:50:23.922554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:28:16.978 [2024-11-27 04:50:23.934115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:16.978 [2024-11-27 04:50:23.934134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:16.978 [2024-11-27 04:50:23.946093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:16.978 [2024-11-27 04:50:23.946627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:28:16.978 [2024-11-27 04:50:23.982091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:16.978 04:50:23 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.978 04:50:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:17.274 [2024-11-27 04:50:24.245185] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:28:17.274 [2024-11-27 04:50:24.245530] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:28:17.274 [2024-11-27 04:50:24.245542] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:28:17.274 [2024-11-27 04:50:24.245550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:28:17.274 [2024-11-27 04:50:24.257099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:17.274 [2024-11-27 04:50:24.257121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:17.274 [2024-11-27 04:50:24.269092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:17.274 [2024-11-27 04:50:24.269646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:28:17.274 [2024-11-27 04:50:24.282127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.274 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:17.533 [2024-11-27 04:50:24.537216] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:28:17.533 [2024-11-27 04:50:24.537549] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:28:17.533 [2024-11-27 04:50:24.537563] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:28:17.533 [2024-11-27 04:50:24.537568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:28:17.533 [2024-11-27 
04:50:24.545102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:17.533 [2024-11-27 04:50:24.545119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:17.533 [2024-11-27 04:50:24.553094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:17.533 [2024-11-27 04:50:24.553630] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:28:17.533 [2024-11-27 04:50:24.560118] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:28:17.533 { 00:28:17.533 "ublk_device": "/dev/ublkb0", 00:28:17.533 "id": 0, 00:28:17.533 "queue_depth": 512, 00:28:17.533 "num_queues": 4, 00:28:17.533 "bdev_name": "Malloc0" 00:28:17.533 }, 00:28:17.533 { 00:28:17.533 "ublk_device": "/dev/ublkb1", 00:28:17.533 "id": 1, 00:28:17.533 "queue_depth": 512, 00:28:17.533 "num_queues": 4, 00:28:17.533 "bdev_name": "Malloc1" 00:28:17.533 }, 00:28:17.533 { 00:28:17.533 "ublk_device": "/dev/ublkb2", 00:28:17.533 "id": 2, 00:28:17.533 "queue_depth": 512, 00:28:17.533 "num_queues": 4, 00:28:17.533 "bdev_name": "Malloc2" 00:28:17.533 }, 00:28:17.533 { 00:28:17.533 "ublk_device": "/dev/ublkb3", 00:28:17.533 "id": 3, 00:28:17.533 "queue_depth": 512, 00:28:17.533 "num_queues": 4, 00:28:17.533 "bdev_name": "Malloc3" 00:28:17.533 } 00:28:17.533 ]' 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:17.533 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:17.534 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:28:17.792 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:28:17.793 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:18.050 04:50:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:18.050 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.051 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:18.051 [2024-11-27 04:50:25.241170] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:18.316 [2024-11-27 04:50:25.281130] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:18.316 [2024-11-27 04:50:25.281968] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:18.316 [2024-11-27 04:50:25.290140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:18.316 [2024-11-27 04:50:25.290393] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:18.316 [2024-11-27 04:50:25.290408] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:18.316 [2024-11-27 04:50:25.305150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:28:18.316 [2024-11-27 04:50:25.358126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:18.316 [2024-11-27 04:50:25.358901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:28:18.316 [2024-11-27 04:50:25.370120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:18.316 [2024-11-27 04:50:25.370368] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:28:18.316 [2024-11-27 04:50:25.370382] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:18.316 [2024-11-27 04:50:25.385183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:28:18.316 [2024-11-27 04:50:25.429126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:18.316 [2024-11-27 04:50:25.429842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:28:18.316 [2024-11-27 04:50:25.435726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:18.316 [2024-11-27 04:50:25.435985] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:28:18.316 [2024-11-27 04:50:25.436000] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:28:18.316 [2024-11-27 04:50:25.453162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:28:18.316 [2024-11-27 04:50:25.484131] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:18.316 [2024-11-27 04:50:25.484768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:28:18.316 [2024-11-27 04:50:25.492093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:18.316 [2024-11-27 04:50:25.492334] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:28:18.316 [2024-11-27 04:50:25.492347] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.316 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:28:18.576 [2024-11-27 04:50:25.684143] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:18.576 [2024-11-27 04:50:25.692082] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:18.576 [2024-11-27 04:50:25.692110] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:28:18.576 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:28:18.576 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:18.576 04:50:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:18.576 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.576 04:50:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.142 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.142 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:19.142 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:19.142 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.142 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.401 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.401 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:19.401 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:28:19.401 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.401 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:28:19.659 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:28:19.917 00:28:19.917 real 0m3.591s 00:28:19.917 user 0m0.827s 00:28:19.917 sys 0m0.157s 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.917 04:50:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:19.917 ************************************ 00:28:19.917 END TEST test_create_multi_ublk 00:28:19.917 ************************************ 00:28:19.917 04:50:26 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:19.917 04:50:26 ublk -- ublk/ublk.sh@147 -- # cleanup 00:28:19.917 04:50:26 ublk -- ublk/ublk.sh@130 -- # killprocess 73736 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@954 -- # '[' -z 73736 ']' 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@958 -- # kill -0 73736 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@959 -- # uname 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73736 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73736' 00:28:19.917 killing process with pid 73736 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@973 -- # kill 73736 00:28:19.917 04:50:26 ublk -- common/autotest_common.sh@978 -- # wait 73736 00:28:20.485 [2024-11-27 04:50:27.567501] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:20.485 [2024-11-27 04:50:27.567556] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:21.053 00:28:21.053 real 0m25.244s 00:28:21.053 user 0m35.613s 00:28:21.053 sys 0m9.865s 00:28:21.053 04:50:28 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.053 ************************************ 00:28:21.053 END TEST ublk 00:28:21.053 ************************************ 00:28:21.053 04:50:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:21.313 04:50:28 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:28:21.313 04:50:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:28:21.313 04:50:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.313 04:50:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.313 ************************************ 00:28:21.313 START TEST ublk_recovery 00:28:21.313 ************************************ 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:28:21.313 * Looking for test storage... 00:28:21.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.313 04:50:28 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.313 --rc genhtml_branch_coverage=1 00:28:21.313 --rc genhtml_function_coverage=1 00:28:21.313 --rc genhtml_legend=1 00:28:21.313 --rc geninfo_all_blocks=1 00:28:21.313 --rc geninfo_unexecuted_blocks=1 00:28:21.313 00:28:21.313 ' 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.313 --rc genhtml_branch_coverage=1 00:28:21.313 --rc genhtml_function_coverage=1 00:28:21.313 --rc genhtml_legend=1 00:28:21.313 --rc geninfo_all_blocks=1 00:28:21.313 --rc geninfo_unexecuted_blocks=1 00:28:21.313 00:28:21.313 ' 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.313 --rc genhtml_branch_coverage=1 00:28:21.313 --rc genhtml_function_coverage=1 00:28:21.313 --rc genhtml_legend=1 00:28:21.313 --rc geninfo_all_blocks=1 00:28:21.313 --rc geninfo_unexecuted_blocks=1 00:28:21.313 00:28:21.313 ' 00:28:21.313 04:50:28 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.313 --rc genhtml_branch_coverage=1 00:28:21.313 --rc genhtml_function_coverage=1 00:28:21.314 --rc genhtml_legend=1 00:28:21.314 --rc geninfo_all_blocks=1 00:28:21.314 --rc geninfo_unexecuted_blocks=1 00:28:21.314 00:28:21.314 ' 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:28:21.314 04:50:28 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74135 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:21.314 04:50:28 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74135 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74135 ']' 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.314 04:50:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.572 [2024-11-27 04:50:28.516183] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:28:21.572 [2024-11-27 04:50:28.516307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74135 ] 00:28:21.572 [2024-11-27 04:50:28.675234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:21.572 [2024-11-27 04:50:28.768360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.572 [2024-11-27 04:50:28.768431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:28:22.147 04:50:29 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.147 [2024-11-27 04:50:29.338086] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:22.147 [2024-11-27 04:50:29.339800] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.147 04:50:29 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.147 04:50:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.411 malloc0 00:28:22.411 04:50:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.411 04:50:29 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:28:22.411 04:50:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.411 04:50:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.411 [2024-11-27 04:50:29.434195] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:28:22.411 [2024-11-27 04:50:29.434288] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:28:22.411 [2024-11-27 04:50:29.434298] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:22.411 [2024-11-27 04:50:29.434306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:28:22.411 [2024-11-27 04:50:29.443183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:22.411 [2024-11-27 04:50:29.443202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:22.411 [2024-11-27 04:50:29.450094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:22.411 [2024-11-27 04:50:29.450215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:28:22.411 [2024-11-27 04:50:29.472098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:28:22.411 1 00:28:22.411 04:50:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.411 04:50:29 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:28:23.350 04:50:30 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74165 00:28:23.350 04:50:30 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:28:23.350 04:50:30 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:28:23.610 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:23.610 fio-3.35 00:28:23.610 Starting 1 process 00:28:28.892 04:50:35 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74135 00:28:28.892 04:50:35 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:28:34.187 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74135 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:28:34.187 04:50:40 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74270 00:28:34.187 04:50:40 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:34.187 04:50:40 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74270 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74270 ']' 00:28:34.187 04:50:40 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:34.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.187 04:50:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:34.187 [2024-11-27 04:50:40.601537] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
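
The recovery path below is the core of this test: the original target (pid 74135) was killed with SIGKILL mid-fio, a fresh target is started, and the still-open /dev/ublkb1 is reattached rather than recreated. Condensed to direct rpc.py calls — a sketch of what the harness does next, not a verbatim extract:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  # reattach the surviving kernel-side ublk device 1 to the recreated bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1
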
00:28:34.187 [2024-11-27 04:50:40.602492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74270 ] 00:28:34.187 [2024-11-27 04:50:40.791755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:34.187 [2024-11-27 04:50:40.923201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.187 [2024-11-27 04:50:40.923219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:28:34.450 04:50:41 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:34.450 [2024-11-27 04:50:41.548089] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:34.450 [2024-11-27 04:50:41.549972] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.450 04:50:41 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:34.450 malloc0 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.450 04:50:41 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.450 04:50:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:34.711 [2024-11-27 04:50:41.652219] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:28:34.711 [2024-11-27 04:50:41.652256] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:34.711 [2024-11-27 04:50:41.652266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:28:34.711 [2024-11-27 04:50:41.660125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:28:34.711 [2024-11-27 04:50:41.660148] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:28:34.711 [2024-11-27 04:50:41.660156] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:28:34.711 [2024-11-27 04:50:41.660234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:28:34.711 1 00:28:34.711 04:50:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.711 04:50:41 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74165 00:28:34.711 [2024-11-27 04:50:41.668104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:28:34.711 [2024-11-27 04:50:41.674639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:28:34.711 [2024-11-27 04:50:41.682289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:28:34.711 [2024-11-27 
04:50:41.682310] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:29:31.046 00:29:31.046 fio_test: (groupid=0, jobs=1): err= 0: pid=74172: Wed Nov 27 04:51:30 2024 00:29:31.046 read: IOPS=24.9k, BW=97.4MiB/s (102MB/s)(5846MiB/60001msec) 00:29:31.046 slat (nsec): min=1100, max=719804, avg=5280.56, stdev=2344.94 00:29:31.046 clat (usec): min=633, max=6202.1k, avg=2545.21, stdev=41789.62 00:29:31.046 lat (usec): min=642, max=6202.1k, avg=2550.49, stdev=41789.62 00:29:31.046 clat percentiles (usec): 00:29:31.046 | 1.00th=[ 1729], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:29:31.046 | 30.00th=[ 1958], 40.00th=[ 2008], 50.00th=[ 2114], 60.00th=[ 2180], 00:29:31.046 | 70.00th=[ 2343], 80.00th=[ 2474], 90.00th=[ 2671], 95.00th=[ 3261], 00:29:31.046 | 99.00th=[ 4883], 99.50th=[ 5669], 99.90th=[ 6718], 99.95th=[ 7570], 00:29:31.046 | 99.99th=[12911] 00:29:31.046 bw ( KiB/s): min= 1192, max=127960, per=100.00%, avg=109809.12, stdev=17805.20, samples=108 00:29:31.046 iops : min= 298, max=31990, avg=27452.28, stdev=4451.30, samples=108 00:29:31.046 write: IOPS=24.9k, BW=97.3MiB/s (102MB/s)(5839MiB/60001msec); 0 zone resets 00:29:31.046 slat (nsec): min=1115, max=189739, avg=5393.02, stdev=2316.03 00:29:31.046 clat (usec): min=619, max=6202.2k, avg=2578.11, stdev=39278.99 00:29:31.046 lat (usec): min=631, max=6202.2k, avg=2583.50, stdev=39278.99 00:29:31.046 clat percentiles (usec): 00:29:31.046 | 1.00th=[ 1762], 5.00th=[ 1909], 10.00th=[ 1958], 20.00th=[ 1991], 00:29:31.046 | 30.00th=[ 2040], 40.00th=[ 2089], 50.00th=[ 2180], 60.00th=[ 2278], 00:29:31.046 | 70.00th=[ 2376], 80.00th=[ 2540], 90.00th=[ 2737], 95.00th=[ 3195], 00:29:31.046 | 99.00th=[ 4883], 99.50th=[ 5735], 99.90th=[ 6718], 99.95th=[ 7570], 00:29:31.046 | 99.99th=[ 9634] 00:29:31.046 bw ( KiB/s): min= 1224, max=126912, per=100.00%, avg=109680.31, stdev=17769.24, samples=108 00:29:31.046 iops : min= 306, max=31728, avg=27420.07, stdev=4442.31, samples=108 00:29:31.046 lat (usec) : 750=0.01%, 1000=0.01% 00:29:31.046 lat (msec) : 2=29.70%, 4=67.55%, 10=2.73%, 20=0.01%, >=2000=0.01% 00:29:31.046 cpu : usr=6.04%, sys=27.01%, ctx=100317, majf=0, minf=13 00:29:31.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:29:31.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.046 issued rwts: total=1496539,1494702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.046 00:29:31.046 Run status group 0 (all jobs): 00:29:31.046 READ: bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=5846MiB (6130MB), run=60001-60001msec 00:29:31.046 WRITE: bw=97.3MiB/s (102MB/s), 97.3MiB/s-97.3MiB/s (102MB/s-102MB/s), io=5839MiB (6122MB), run=60001-60001msec 00:29:31.046 00:29:31.046 Disk stats (read/write): 00:29:31.046 ublkb1: ios=1493235/1491388, merge=0/0, ticks=3664371/3591288, in_queue=7255660, util=99.91% 00:29:31.046 04:51:30 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:31.046 [2024-11-27 04:51:30.732607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:29:31.046 [2024-11-27 04:51:30.778103] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:29:31.046 [2024-11-27 04:51:30.778246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:29:31.046 [2024-11-27 04:51:30.788088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:31.046 [2024-11-27 04:51:30.788196] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:29:31.046 [2024-11-27 04:51:30.788206] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.046 04:51:30 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:31.046 [2024-11-27 04:51:30.794166] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:31.046 [2024-11-27 04:51:30.798356] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:31.046 [2024-11-27 04:51:30.798391] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.046 04:51:30 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:29:31.046 04:51:30 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:29:31.046 04:51:30 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74270 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74270 ']' 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74270 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74270 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.046 killing process with pid 74270 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74270' 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74270 00:29:31.046 04:51:30 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74270 00:29:31.046 [2024-11-27 04:51:31.878340] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:31.046 [2024-11-27 04:51:31.878382] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:31.046 00:29:31.046 real 1m4.311s 00:29:31.046 user 1m44.072s 00:29:31.046 sys 0m33.513s 00:29:31.046 04:51:32 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.046 ************************************ 00:29:31.046 END TEST ublk_recovery 00:29:31.046 ************************************ 00:29:31.046 04:51:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:31.046 04:51:32 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:29:31.046 04:51:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:29:31.046 04:51:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:29:31.046 04:51:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.046 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:29:31.046 04:51:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:29:31.046 04:51:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:29:31.047 04:51:32 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:31.047 04:51:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:31.047 04:51:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.047 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:29:31.047 ************************************ 00:29:31.047 START TEST ftl 00:29:31.047 ************************************ 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:31.047 * Looking for test storage... 00:29:31.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.047 04:51:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.047 04:51:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.047 04:51:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.047 04:51:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.047 04:51:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.047 04:51:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:29:31.047 04:51:32 ftl -- scripts/common.sh@345 -- # : 1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.047 04:51:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.047 04:51:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@353 -- # local d=1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.047 04:51:32 ftl -- scripts/common.sh@355 -- # echo 1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.047 04:51:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@353 -- # local d=2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.047 04:51:32 ftl -- scripts/common.sh@355 -- # echo 2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.047 04:51:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.047 04:51:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.047 04:51:32 ftl -- scripts/common.sh@368 -- # return 0 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.047 --rc genhtml_branch_coverage=1 00:29:31.047 --rc genhtml_function_coverage=1 00:29:31.047 --rc genhtml_legend=1 00:29:31.047 --rc geninfo_all_blocks=1 00:29:31.047 --rc geninfo_unexecuted_blocks=1 00:29:31.047 00:29:31.047 ' 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.047 --rc genhtml_branch_coverage=1 00:29:31.047 --rc genhtml_function_coverage=1 00:29:31.047 --rc genhtml_legend=1 00:29:31.047 --rc geninfo_all_blocks=1 00:29:31.047 --rc geninfo_unexecuted_blocks=1 00:29:31.047 00:29:31.047 ' 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.047 --rc genhtml_branch_coverage=1 00:29:31.047 --rc genhtml_function_coverage=1 00:29:31.047 --rc genhtml_legend=1 00:29:31.047 --rc geninfo_all_blocks=1 00:29:31.047 --rc geninfo_unexecuted_blocks=1 00:29:31.047 00:29:31.047 ' 00:29:31.047 04:51:32 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.047 --rc genhtml_branch_coverage=1 00:29:31.047 --rc genhtml_function_coverage=1 00:29:31.047 --rc genhtml_legend=1 00:29:31.047 --rc geninfo_all_blocks=1 00:29:31.047 --rc geninfo_unexecuted_blocks=1 00:29:31.047 00:29:31.047 ' 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:31.047 04:51:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:31.047 04:51:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.047 04:51:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.047 04:51:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
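For reference, the ublk_recovery pass that finished above drives a fixed RPC sequence: create the ublk target, back it with a 64 MB malloc bdev (4 KiB blocks), re-attach device 1 with ublk_recover_disk, run the 60-second fio verify job against /dev/ublkb1, then stop the disk and destroy the target. A minimal sketch of the same sequence against a running spdk_tgt, using the rpc.py path from this workspace (the fio step is elided):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                      # ublk_recovery.sh@47
    $rpc bdev_malloc_create -b malloc0 64 4096   # 64 MB backing bdev, 4 KiB blocks (@48)
    $rpc ublk_recover_disk malloc0 1             # re-attach ublk device 1 (@49)
    # ... fio verify workload runs against /dev/ublkb1 for 60 s ...
    $rpc ublk_stop_disk 1                        # @55
    $rpc ublk_destroy_target                     # @56

Every RPC name here appears verbatim in the trace; only the real script's looping and error handling are omitted.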
00:29:31.047 04:51:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:31.047 04:51:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.047 04:51:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.047 04:51:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.047 04:51:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.047 04:51:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.047 04:51:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:31.047 04:51:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:31.047 04:51:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.047 04:51:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.047 04:51:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:31.047 04:51:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.047 04:51:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.047 04:51:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.047 04:51:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.047 04:51:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:31.047 04:51:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:31.047 04:51:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.047 04:51:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:29:31.047 04:51:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:31.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:31.047 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:31.047 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:31.047 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:31.047 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:31.047 04:51:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:29:31.047 04:51:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75075 00:29:31.047 04:51:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75075 00:29:31.047 04:51:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 75075 ']' 00:29:31.047 04:51:33 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.047 04:51:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.047 04:51:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.047 04:51:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.047 04:51:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:31.047 [2024-11-27 04:51:33.357417] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:29:31.047 [2024-11-27 04:51:33.357544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75075 ] 00:29:31.047 [2024-11-27 04:51:33.519633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.047 [2024-11-27 04:51:33.618095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.047 04:51:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.047 04:51:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:29:31.047 04:51:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:29:31.047 04:51:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:29:31.047 04:51:35 ftl -- ftl/ftl.sh@50 -- # break 00:29:31.048 04:51:35 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:29:31.048 04:51:35 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:29:31.048 04:51:35 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:29:31.048 04:51:35 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:29:31.048 04:51:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:29:31.048 04:51:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:29:31.048 04:51:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:29:31.048 04:51:36 ftl -- ftl/ftl.sh@63 -- # break 00:29:31.048 04:51:36 ftl -- ftl/ftl.sh@66 -- # killprocess 75075 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 75075 ']' 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@958 -- # kill -0 75075 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@959 -- # uname 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.048 04:51:36 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75075 00:29:31.048 killing process with pid 75075 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75075' 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@973 -- # kill 75075 00:29:31.048 04:51:36 ftl -- common/autotest_common.sh@978 -- # wait 75075 00:29:31.048 04:51:37 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:29:31.048 04:51:37 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:29:31.048 04:51:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:31.048 04:51:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.048 04:51:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:31.048 ************************************ 00:29:31.048 START TEST ftl_fio_basic 00:29:31.048 ************************************ 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:29:31.048 * Looking for test storage... 00:29:31.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.048 --rc genhtml_branch_coverage=1 00:29:31.048 --rc genhtml_function_coverage=1 00:29:31.048 --rc genhtml_legend=1 00:29:31.048 --rc geninfo_all_blocks=1 00:29:31.048 --rc geninfo_unexecuted_blocks=1 00:29:31.048 00:29:31.048 ' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.048 --rc genhtml_branch_coverage=1 00:29:31.048 --rc genhtml_function_coverage=1 00:29:31.048 --rc genhtml_legend=1 00:29:31.048 --rc geninfo_all_blocks=1 00:29:31.048 --rc geninfo_unexecuted_blocks=1 00:29:31.048 00:29:31.048 ' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.048 --rc genhtml_branch_coverage=1 00:29:31.048 --rc genhtml_function_coverage=1 00:29:31.048 --rc genhtml_legend=1 00:29:31.048 --rc geninfo_all_blocks=1 00:29:31.048 --rc geninfo_unexecuted_blocks=1 00:29:31.048 00:29:31.048 ' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.048 --rc genhtml_branch_coverage=1 00:29:31.048 --rc genhtml_function_coverage=1 00:29:31.048 --rc genhtml_legend=1 00:29:31.048 --rc geninfo_all_blocks=1 00:29:31.048 --rc geninfo_unexecuted_blocks=1 00:29:31.048 00:29:31.048 ' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
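The lcov --version probe and the "lt 1.15 2" walk traced above come from cmp_versions in scripts/common.sh: both version strings are split on ".", "-" and ":" and compared numerically component by component, which is how the run settles on the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 exports. A standalone sketch of that comparison, where the function name ver_lt is illustrative rather than an SPDK helper:

    ver_lt() {                      # is $1 strictly older than $2?
        local IFS=.-:
        local -a a=($1) b=($2)      # "1.15" -> (1 15), "2" -> (2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"   # true, matching the trace above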
00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:31.048 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75208 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75208 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75208 ']' 00:29:31.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.049 04:51:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:31.049 [2024-11-27 04:51:37.950092] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
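fio.sh resolves its suite argument ("basic" here) to a list of fio job names through the suite associative array declared above, and passes the target bdev to those jobs through the FTL_BDEV_NAME and FTL_JSON_CONF exports. A condensed sketch of that dispatch; the lookup key, bail-out, and loop body are illustrative, while the values are the ones recorded in this run:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    tests=${suite[basic]}
    [ -n "$tests" ] || exit 1        # unknown suite name: bail out
    export FTL_BDEV_NAME=ftl0        # read by the fio job files
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    for t in $tests; do
        echo "would run fio job $t against $FTL_BDEV_NAME"
    done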
00:29:31.049 [2024-11-27 04:51:37.950204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75208 ] 00:29:31.049 [2024-11-27 04:51:38.108711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.049 [2024-11-27 04:51:38.207713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.049 [2024-11-27 04:51:38.208120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.049 [2024-11-27 04:51:38.208211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:29:31.622 04:51:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:29:31.883 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:32.146 { 00:29:32.146 "name": "nvme0n1", 00:29:32.146 "aliases": [ 00:29:32.146 "313bf15e-df22-48b3-835a-53ecdee32cf5" 00:29:32.146 ], 00:29:32.146 "product_name": "NVMe disk", 00:29:32.146 "block_size": 4096, 00:29:32.146 "num_blocks": 1310720, 00:29:32.146 "uuid": "313bf15e-df22-48b3-835a-53ecdee32cf5", 00:29:32.146 "numa_id": -1, 00:29:32.146 "assigned_rate_limits": { 00:29:32.146 "rw_ios_per_sec": 0, 00:29:32.146 "rw_mbytes_per_sec": 0, 00:29:32.146 "r_mbytes_per_sec": 0, 00:29:32.146 "w_mbytes_per_sec": 0 00:29:32.146 }, 00:29:32.146 "claimed": false, 00:29:32.146 "zoned": false, 00:29:32.146 "supported_io_types": { 00:29:32.146 "read": true, 00:29:32.146 "write": true, 00:29:32.146 "unmap": true, 00:29:32.146 "flush": true, 00:29:32.146 "reset": true, 00:29:32.146 "nvme_admin": true, 00:29:32.146 "nvme_io": true, 00:29:32.146 "nvme_io_md": false, 00:29:32.146 "write_zeroes": true, 00:29:32.146 "zcopy": false, 00:29:32.146 "get_zone_info": false, 00:29:32.146 "zone_management": false, 00:29:32.146 "zone_append": false, 00:29:32.146 "compare": true, 00:29:32.146 "compare_and_write": false, 00:29:32.146 "abort": true, 00:29:32.146 
"seek_hole": false, 00:29:32.146 "seek_data": false, 00:29:32.146 "copy": true, 00:29:32.146 "nvme_iov_md": false 00:29:32.146 }, 00:29:32.146 "driver_specific": { 00:29:32.146 "nvme": [ 00:29:32.146 { 00:29:32.146 "pci_address": "0000:00:11.0", 00:29:32.146 "trid": { 00:29:32.146 "trtype": "PCIe", 00:29:32.146 "traddr": "0000:00:11.0" 00:29:32.146 }, 00:29:32.146 "ctrlr_data": { 00:29:32.146 "cntlid": 0, 00:29:32.146 "vendor_id": "0x1b36", 00:29:32.146 "model_number": "QEMU NVMe Ctrl", 00:29:32.146 "serial_number": "12341", 00:29:32.146 "firmware_revision": "8.0.0", 00:29:32.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:32.146 "oacs": { 00:29:32.146 "security": 0, 00:29:32.146 "format": 1, 00:29:32.146 "firmware": 0, 00:29:32.146 "ns_manage": 1 00:29:32.146 }, 00:29:32.146 "multi_ctrlr": false, 00:29:32.146 "ana_reporting": false 00:29:32.146 }, 00:29:32.146 "vs": { 00:29:32.146 "nvme_version": "1.4" 00:29:32.146 }, 00:29:32.146 "ns_data": { 00:29:32.146 "id": 1, 00:29:32.146 "can_share": false 00:29:32.146 } 00:29:32.146 } 00:29:32.146 ], 00:29:32.146 "mp_policy": "active_passive" 00:29:32.146 } 00:29:32.146 } 00:29:32.146 ]' 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:32.146 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:32.406 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:29:32.406 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:32.665 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e2a2ea63-b552-4617-a061-20a80bb3ab9d 00:29:32.665 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e2a2ea63-b552-4617-a061-20a80bb3ab9d 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a9996508-720c-4d5d-a2ed-698fcccf0d99 
00:29:32.926 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:29:32.926 04:51:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.187 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:33.187 { 00:29:33.187 "name": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.187 "aliases": [ 00:29:33.187 "lvs/nvme0n1p0" 00:29:33.187 ], 00:29:33.187 "product_name": "Logical Volume", 00:29:33.187 "block_size": 4096, 00:29:33.187 "num_blocks": 26476544, 00:29:33.187 "uuid": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.187 "assigned_rate_limits": { 00:29:33.187 "rw_ios_per_sec": 0, 00:29:33.187 "rw_mbytes_per_sec": 0, 00:29:33.187 "r_mbytes_per_sec": 0, 00:29:33.187 "w_mbytes_per_sec": 0 00:29:33.187 }, 00:29:33.187 "claimed": false, 00:29:33.187 "zoned": false, 00:29:33.187 "supported_io_types": { 00:29:33.187 "read": true, 00:29:33.187 "write": true, 00:29:33.187 "unmap": true, 00:29:33.187 "flush": false, 00:29:33.187 "reset": true, 00:29:33.187 "nvme_admin": false, 00:29:33.187 "nvme_io": false, 00:29:33.187 "nvme_io_md": false, 00:29:33.187 "write_zeroes": true, 00:29:33.187 "zcopy": false, 00:29:33.187 "get_zone_info": false, 00:29:33.187 "zone_management": false, 00:29:33.187 "zone_append": false, 00:29:33.187 "compare": false, 00:29:33.187 "compare_and_write": false, 00:29:33.187 "abort": false, 00:29:33.187 "seek_hole": true, 00:29:33.187 "seek_data": true, 00:29:33.187 "copy": false, 00:29:33.187 "nvme_iov_md": false 00:29:33.187 }, 00:29:33.187 "driver_specific": { 00:29:33.187 "lvol": { 00:29:33.187 "lvol_store_uuid": "e2a2ea63-b552-4617-a061-20a80bb3ab9d", 00:29:33.187 "base_bdev": "nvme0n1", 00:29:33.187 "thin_provision": true, 00:29:33.188 "num_allocated_clusters": 0, 00:29:33.188 "snapshot": false, 00:29:33.188 "clone": false, 00:29:33.188 "esnap_clone": false 00:29:33.188 } 00:29:33.188 } 00:29:33.188 } 00:29:33.188 ]' 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:29:33.188 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.450 04:51:40 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:29:33.450 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:33.712 { 00:29:33.712 "name": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.712 "aliases": [ 00:29:33.712 "lvs/nvme0n1p0" 00:29:33.712 ], 00:29:33.712 "product_name": "Logical Volume", 00:29:33.712 "block_size": 4096, 00:29:33.712 "num_blocks": 26476544, 00:29:33.712 "uuid": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.712 "assigned_rate_limits": { 00:29:33.712 "rw_ios_per_sec": 0, 00:29:33.712 "rw_mbytes_per_sec": 0, 00:29:33.712 "r_mbytes_per_sec": 0, 00:29:33.712 "w_mbytes_per_sec": 0 00:29:33.712 }, 00:29:33.712 "claimed": false, 00:29:33.712 "zoned": false, 00:29:33.712 "supported_io_types": { 00:29:33.712 "read": true, 00:29:33.712 "write": true, 00:29:33.712 "unmap": true, 00:29:33.712 "flush": false, 00:29:33.712 "reset": true, 00:29:33.712 "nvme_admin": false, 00:29:33.712 "nvme_io": false, 00:29:33.712 "nvme_io_md": false, 00:29:33.712 "write_zeroes": true, 00:29:33.712 "zcopy": false, 00:29:33.712 "get_zone_info": false, 00:29:33.712 "zone_management": false, 00:29:33.712 "zone_append": false, 00:29:33.712 "compare": false, 00:29:33.712 "compare_and_write": false, 00:29:33.712 "abort": false, 00:29:33.712 "seek_hole": true, 00:29:33.712 "seek_data": true, 00:29:33.712 "copy": false, 00:29:33.712 "nvme_iov_md": false 00:29:33.712 }, 00:29:33.712 "driver_specific": { 00:29:33.712 "lvol": { 00:29:33.712 "lvol_store_uuid": "e2a2ea63-b552-4617-a061-20a80bb3ab9d", 00:29:33.712 "base_bdev": "nvme0n1", 00:29:33.712 "thin_provision": true, 00:29:33.712 "num_allocated_clusters": 0, 00:29:33.712 "snapshot": false, 00:29:33.712 "clone": false, 00:29:33.712 "esnap_clone": false 00:29:33.712 } 00:29:33.712 } 00:29:33.712 } 00:29:33.712 ]' 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:29:33.712 04:51:40 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:29:33.975 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:29:33.975 04:51:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9996508-720c-4d5d-a2ed-698fcccf0d99 00:29:33.975 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:33.975 { 00:29:33.975 "name": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.975 "aliases": [ 00:29:33.975 "lvs/nvme0n1p0" 00:29:33.975 ], 00:29:33.975 "product_name": "Logical Volume", 00:29:33.975 "block_size": 4096, 00:29:33.975 "num_blocks": 26476544, 00:29:33.975 "uuid": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:33.975 "assigned_rate_limits": { 00:29:33.975 "rw_ios_per_sec": 0, 00:29:33.975 "rw_mbytes_per_sec": 0, 00:29:33.975 "r_mbytes_per_sec": 0, 00:29:33.975 "w_mbytes_per_sec": 0 00:29:33.975 }, 00:29:33.975 "claimed": false, 00:29:33.975 "zoned": false, 00:29:33.975 "supported_io_types": { 00:29:33.975 "read": true, 00:29:33.975 "write": true, 00:29:33.975 "unmap": true, 00:29:33.975 "flush": false, 00:29:33.975 "reset": true, 00:29:33.975 "nvme_admin": false, 00:29:33.975 "nvme_io": false, 00:29:33.975 "nvme_io_md": false, 00:29:33.975 "write_zeroes": true, 00:29:33.975 "zcopy": false, 00:29:33.975 "get_zone_info": false, 00:29:33.975 "zone_management": false, 00:29:33.975 "zone_append": false, 00:29:33.975 "compare": false, 00:29:33.975 "compare_and_write": false, 00:29:33.975 "abort": false, 00:29:33.975 "seek_hole": true, 00:29:33.975 "seek_data": true, 00:29:33.975 "copy": false, 00:29:33.975 "nvme_iov_md": false 00:29:33.975 }, 00:29:33.975 "driver_specific": { 00:29:33.975 "lvol": { 00:29:33.975 "lvol_store_uuid": "e2a2ea63-b552-4617-a061-20a80bb3ab9d", 00:29:33.975 "base_bdev": "nvme0n1", 00:29:33.975 "thin_provision": true, 00:29:33.975 "num_allocated_clusters": 0, 00:29:33.975 "snapshot": false, 00:29:33.975 "clone": false, 00:29:33.975 "esnap_clone": false 00:29:33.975 } 00:29:33.975 } 00:29:33.975 } 00:29:33.975 ]' 00:29:33.975 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:33.975 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:29:33.975 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:34.236 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:34.236 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:34.236 04:51:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:29:34.236 04:51:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:29:34.236 04:51:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:29:34.237 04:51:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a9996508-720c-4d5d-a2ed-698fcccf0d99 -c nvc0n1p0 --l2p_dram_limit 60 00:29:34.237 [2024-11-27 04:51:41.381833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.381883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:34.237 [2024-11-27 04:51:41.381900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:34.237 
[2024-11-27 04:51:41.381908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.381983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.381994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:34.237 [2024-11-27 04:51:41.382005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:34.237 [2024-11-27 04:51:41.382013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.382045] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:34.237 [2024-11-27 04:51:41.382829] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:34.237 [2024-11-27 04:51:41.382857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.382866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:34.237 [2024-11-27 04:51:41.382876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:29:34.237 [2024-11-27 04:51:41.382883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.383005] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ca6f189b-eb3c-4b2c-aaa0-6c1c0b6ad38f 00:29:34.237 [2024-11-27 04:51:41.384103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.384133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:34.237 [2024-11-27 04:51:41.384143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:34.237 [2024-11-27 04:51:41.384153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.389337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.389500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:34.237 [2024-11-27 04:51:41.389516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.117 ms 00:29:34.237 [2024-11-27 04:51:41.389529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.389627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.389638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:34.237 [2024-11-27 04:51:41.389646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:34.237 [2024-11-27 04:51:41.389659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.389715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.389727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:34.237 [2024-11-27 04:51:41.389735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:34.237 [2024-11-27 04:51:41.389744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.389770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:34.237 [2024-11-27 04:51:41.393363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 
04:51:41.393392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:34.237 [2024-11-27 04:51:41.393408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:29:34.237 [2024-11-27 04:51:41.393416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.393460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.393468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:34.237 [2024-11-27 04:51:41.393479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:34.237 [2024-11-27 04:51:41.393487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.393510] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:34.237 [2024-11-27 04:51:41.393658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:34.237 [2024-11-27 04:51:41.393674] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:34.237 [2024-11-27 04:51:41.393686] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:34.237 [2024-11-27 04:51:41.393698] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:34.237 [2024-11-27 04:51:41.393708] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:34.237 [2024-11-27 04:51:41.393719] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:34.237 [2024-11-27 04:51:41.393728] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:34.237 [2024-11-27 04:51:41.393738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:34.237 [2024-11-27 04:51:41.393745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:34.237 [2024-11-27 04:51:41.393757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.393766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:34.237 [2024-11-27 04:51:41.393777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:29:34.237 [2024-11-27 04:51:41.393785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.393875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.237 [2024-11-27 04:51:41.393883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:34.237 [2024-11-27 04:51:41.393893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:34.237 [2024-11-27 04:51:41.393901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.237 [2024-11-27 04:51:41.394034] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:34.237 [2024-11-27 04:51:41.394046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:34.237 [2024-11-27 04:51:41.394058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394093] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:29:34.237 [2024-11-27 04:51:41.394101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:34.237 [2024-11-27 04:51:41.394128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:34.237 [2024-11-27 04:51:41.394147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:34.237 [2024-11-27 04:51:41.394154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:34.237 [2024-11-27 04:51:41.394163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:34.237 [2024-11-27 04:51:41.394170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:34.237 [2024-11-27 04:51:41.394178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:34.237 [2024-11-27 04:51:41.394185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:34.237 [2024-11-27 04:51:41.394203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:34.237 [2024-11-27 04:51:41.394226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:34.237 [2024-11-27 04:51:41.394246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:34.237 [2024-11-27 04:51:41.394268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:34.237 [2024-11-27 04:51:41.394289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:34.237 [2024-11-27 04:51:41.394303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:34.237 [2024-11-27 04:51:41.394312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:34.237 [2024-11-27 04:51:41.394339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:34.237 [2024-11-27 04:51:41.394345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:34.237 [2024-11-27 04:51:41.394353] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:34.237 [2024-11-27 04:51:41.394359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:34.237 [2024-11-27 04:51:41.394367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:34.237 [2024-11-27 04:51:41.394374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.237 [2024-11-27 04:51:41.394381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:34.237 [2024-11-27 04:51:41.394388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:34.238 [2024-11-27 04:51:41.394398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.238 [2024-11-27 04:51:41.394404] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:34.238 [2024-11-27 04:51:41.394413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:34.238 [2024-11-27 04:51:41.394420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:34.238 [2024-11-27 04:51:41.394429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:34.238 [2024-11-27 04:51:41.394437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:34.238 [2024-11-27 04:51:41.394447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:34.238 [2024-11-27 04:51:41.394453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:34.238 [2024-11-27 04:51:41.394462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:34.238 [2024-11-27 04:51:41.394468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:34.238 [2024-11-27 04:51:41.394485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:34.238 [2024-11-27 04:51:41.394495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:34.238 [2024-11-27 04:51:41.394506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:34.238 [2024-11-27 04:51:41.394522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:34.238 [2024-11-27 04:51:41.394529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:34.238 [2024-11-27 04:51:41.394537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:34.238 [2024-11-27 04:51:41.394544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:34.238 [2024-11-27 04:51:41.394552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:34.238 [2024-11-27 04:51:41.394559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:34.238 [2024-11-27 04:51:41.394567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:29:34.238 [2024-11-27 04:51:41.394574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:34.238 [2024-11-27 04:51:41.394584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:34.238 [2024-11-27 04:51:41.394624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:34.238 [2024-11-27 04:51:41.394635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:34.238 [2024-11-27 04:51:41.394652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:34.238 [2024-11-27 04:51:41.394659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:34.238 [2024-11-27 04:51:41.394669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:34.238 [2024-11-27 04:51:41.394676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.238 [2024-11-27 04:51:41.394685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:34.238 [2024-11-27 04:51:41.394694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:29:34.238 [2024-11-27 04:51:41.394702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.238 [2024-11-27 04:51:41.394767] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
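(Editor's note: the superblock metadata layout dumped above lists each region as blk_offs/blk_sz in hex, counted in 4 KiB blocks — the block_size that bdev_get_bdevs reports for ftl0 further down. The MiB figures in the layout dump follow directly from that: for instance the nvc entry "type:0x2 blk_offs:0x20 blk_sz:0x5000" lines up with "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB". A minimal shell sketch of the conversion, with two region specs hard-coded for illustration:

blk=4096                                  # block_size reported for ftl0 below
for spec in "l2p 0x20 0x5000" "band_md 0x5020 0x80"; do
  set -- $spec                            # name, blk_offs, blk_sz (hex)
  awk -v n="$1" -v o="$(($2))" -v s="$(($3))" -v b="$blk" \
    'BEGIN { printf "%s offset: %.2f MiB blocks: %.2f MiB\n", n, o*b/2^20, s*b/2^20 }'
done
# -> l2p offset: 0.12 MiB blocks: 80.00 MiB
# -> band_md offset: 80.12 MiB blocks: 0.50 MiB
)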
00:29:34.238 [2024-11-27 04:51:41.394781] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:38.431 [2024-11-27 04:51:45.054121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.054185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:38.431 [2024-11-27 04:51:45.054200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3659.339 ms 00:29:38.431 [2024-11-27 04:51:45.054210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.079181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.079225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.431 [2024-11-27 04:51:45.079237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.775 ms 00:29:38.431 [2024-11-27 04:51:45.079247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.079368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.079380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:38.431 [2024-11-27 04:51:45.079389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:38.431 [2024-11-27 04:51:45.079400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.128014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.128061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.431 [2024-11-27 04:51:45.128088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.571 ms 00:29:38.431 [2024-11-27 04:51:45.128100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.128140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.128152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:38.431 [2024-11-27 04:51:45.128161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:38.431 [2024-11-27 04:51:45.128169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.128517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.128540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:38.431 [2024-11-27 04:51:45.128551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:29:38.431 [2024-11-27 04:51:45.128560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.128686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.128697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:38.431 [2024-11-27 04:51:45.128705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:38.431 [2024-11-27 04:51:45.128715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.142841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.142874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:38.431 [2024-11-27 
04:51:45.142884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.101 ms 00:29:38.431 [2024-11-27 04:51:45.142893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.154016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:38.431 [2024-11-27 04:51:45.167688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.167722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:38.431 [2024-11-27 04:51:45.167738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.710 ms 00:29:38.431 [2024-11-27 04:51:45.167745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.216455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.216500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:38.431 [2024-11-27 04:51:45.216516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.673 ms 00:29:38.431 [2024-11-27 04:51:45.216524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.216707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.216718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:38.431 [2024-11-27 04:51:45.216730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:29:38.431 [2024-11-27 04:51:45.216737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.239248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.239388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:38.431 [2024-11-27 04:51:45.239408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.458 ms 00:29:38.431 [2024-11-27 04:51:45.239416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.261202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.261232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:38.431 [2024-11-27 04:51:45.261245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.747 ms 00:29:38.431 [2024-11-27 04:51:45.261251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.261820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.261836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:38.431 [2024-11-27 04:51:45.261847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:29:38.431 [2024-11-27 04:51:45.261854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.324426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.324471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:38.431 [2024-11-27 04:51:45.324491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.527 ms 00:29:38.431 [2024-11-27 04:51:45.324500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 
04:51:45.348138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.348176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:38.431 [2024-11-27 04:51:45.348190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.534 ms 00:29:38.431 [2024-11-27 04:51:45.348199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.370997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.371153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:38.431 [2024-11-27 04:51:45.371174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.754 ms 00:29:38.431 [2024-11-27 04:51:45.371182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.400486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.400539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:38.431 [2024-11-27 04:51:45.400561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.264 ms 00:29:38.431 [2024-11-27 04:51:45.400572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.400643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.400657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:38.431 [2024-11-27 04:51:45.400679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:38.431 [2024-11-27 04:51:45.400690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.400823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.431 [2024-11-27 04:51:45.400838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:38.431 [2024-11-27 04:51:45.400852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:38.431 [2024-11-27 04:51:45.400863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.431 [2024-11-27 04:51:45.402477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4020.186 ms, result 0 00:29:38.431 { 00:29:38.431 "name": "ftl0", 00:29:38.431 "uuid": "ca6f189b-eb3c-4b2c-aaa0-6c1c0b6ad38f" 00:29:38.431 } 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.432 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:38.690 [ 00:29:38.690 { 00:29:38.690 "name": "ftl0", 00:29:38.690 "aliases": [ 00:29:38.690 "ca6f189b-eb3c-4b2c-aaa0-6c1c0b6ad38f" 00:29:38.690 ], 00:29:38.690 "product_name": "FTL 
disk", 00:29:38.690 "block_size": 4096, 00:29:38.690 "num_blocks": 20971520, 00:29:38.690 "uuid": "ca6f189b-eb3c-4b2c-aaa0-6c1c0b6ad38f", 00:29:38.690 "assigned_rate_limits": { 00:29:38.690 "rw_ios_per_sec": 0, 00:29:38.690 "rw_mbytes_per_sec": 0, 00:29:38.690 "r_mbytes_per_sec": 0, 00:29:38.690 "w_mbytes_per_sec": 0 00:29:38.690 }, 00:29:38.690 "claimed": false, 00:29:38.690 "zoned": false, 00:29:38.690 "supported_io_types": { 00:29:38.690 "read": true, 00:29:38.690 "write": true, 00:29:38.690 "unmap": true, 00:29:38.690 "flush": true, 00:29:38.690 "reset": false, 00:29:38.690 "nvme_admin": false, 00:29:38.690 "nvme_io": false, 00:29:38.690 "nvme_io_md": false, 00:29:38.690 "write_zeroes": true, 00:29:38.690 "zcopy": false, 00:29:38.690 "get_zone_info": false, 00:29:38.690 "zone_management": false, 00:29:38.690 "zone_append": false, 00:29:38.690 "compare": false, 00:29:38.690 "compare_and_write": false, 00:29:38.690 "abort": false, 00:29:38.690 "seek_hole": false, 00:29:38.690 "seek_data": false, 00:29:38.690 "copy": false, 00:29:38.690 "nvme_iov_md": false 00:29:38.690 }, 00:29:38.690 "driver_specific": { 00:29:38.690 "ftl": { 00:29:38.690 "base_bdev": "a9996508-720c-4d5d-a2ed-698fcccf0d99", 00:29:38.690 "cache": "nvc0n1p0" 00:29:38.690 } 00:29:38.690 } 00:29:38.690 } 00:29:38.690 ] 00:29:38.690 04:51:45 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:29:38.690 04:51:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:29:38.690 04:51:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:38.948 04:51:46 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:29:38.948 04:51:46 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:39.209 [2024-11-27 04:51:46.206589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.206729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:39.209 [2024-11-27 04:51:46.206785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:39.209 [2024-11-27 04:51:46.206814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.206856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:39.209 [2024-11-27 04:51:46.209493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.209594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:39.209 [2024-11-27 04:51:46.209652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:29:39.209 [2024-11-27 04:51:46.209674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.210118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.210187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:39.209 [2024-11-27 04:51:46.210235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:29:39.209 [2024-11-27 04:51:46.210257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.213514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.213582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:39.209 
[2024-11-27 04:51:46.213643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:29:39.209 [2024-11-27 04:51:46.213666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.219879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.219972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:39.209 [2024-11-27 04:51:46.220023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:29:39.209 [2024-11-27 04:51:46.220045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.243181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.243296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:39.209 [2024-11-27 04:51:46.243366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.033 ms 00:29:39.209 [2024-11-27 04:51:46.243389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.258039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.258162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:39.209 [2024-11-27 04:51:46.258183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.589 ms 00:29:39.209 [2024-11-27 04:51:46.258191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.258365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.258376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:39.209 [2024-11-27 04:51:46.258386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:39.209 [2024-11-27 04:51:46.258393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.280991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.281113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:39.209 [2024-11-27 04:51:46.281131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.575 ms 00:29:39.209 [2024-11-27 04:51:46.281138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.303675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.303704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:39.209 [2024-11-27 04:51:46.303715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.502 ms 00:29:39.209 [2024-11-27 04:51:46.303723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.325618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.325648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:39.209 [2024-11-27 04:51:46.325659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.852 ms 00:29:39.209 [2024-11-27 04:51:46.325666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.347896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.209 [2024-11-27 04:51:46.348000] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:39.209 [2024-11-27 04:51:46.348018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.136 ms 00:29:39.209 [2024-11-27 04:51:46.348025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.209 [2024-11-27 04:51:46.348063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:39.209 [2024-11-27 04:51:46.348093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:39.209 [2024-11-27 04:51:46.348197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 
[2024-11-27 04:51:46.348279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:29:39.210 [2024-11-27 04:51:46.348490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:39.210 [2024-11-27 04:51:46.348946] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:39.210 [2024-11-27 04:51:46.348955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ca6f189b-eb3c-4b2c-aaa0-6c1c0b6ad38f 00:29:39.210 [2024-11-27 04:51:46.348963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:39.211 [2024-11-27 04:51:46.348973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:39.211 [2024-11-27 04:51:46.348982] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:39.211 [2024-11-27 04:51:46.348991] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:39.211 [2024-11-27 04:51:46.348997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:39.211 [2024-11-27 04:51:46.349006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:39.211 [2024-11-27 04:51:46.349013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:39.211 [2024-11-27 04:51:46.349021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:39.211 [2024-11-27 04:51:46.349027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:39.211 [2024-11-27 04:51:46.349036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.211 [2024-11-27 04:51:46.349043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:39.211 [2024-11-27 04:51:46.349053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:29:39.211 [2024-11-27 04:51:46.349060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.211 [2024-11-27 04:51:46.361212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.211 [2024-11-27 04:51:46.361240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:39.211 [2024-11-27 04:51:46.361251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.105 ms 00:29:39.211 [2024-11-27 04:51:46.361258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.211 [2024-11-27 04:51:46.361617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.211 [2024-11-27 04:51:46.361627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:39.211 [2024-11-27 04:51:46.361637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:29:39.211 [2024-11-27 04:51:46.361644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.211 [2024-11-27 04:51:46.405093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.211 [2024-11-27 04:51:46.405127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:39.211 [2024-11-27 04:51:46.405139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.211 [2024-11-27 04:51:46.405146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
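(Editor's note: in the statistics dumped just above during shutdown, the WAF line is consistent with total media writes divided by user writes — 960 internal writes against 0 user writes on the freshly created device, hence "WAF: inf". A one-liner reproducing that figure, with the values copied from the dump and the zero denominator guarded:

awk 'BEGIN { total = 960; user = 0; print "WAF:", (user ? total / user : "inf") }'
# -> WAF: inf
)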
00:29:39.211 [2024-11-27 04:51:46.405207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.211 [2024-11-27 04:51:46.405215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:39.211 [2024-11-27 04:51:46.405224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.211 [2024-11-27 04:51:46.405232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.211 [2024-11-27 04:51:46.405328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.211 [2024-11-27 04:51:46.405340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:39.211 [2024-11-27 04:51:46.405350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.211 [2024-11-27 04:51:46.405364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.211 [2024-11-27 04:51:46.405389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.211 [2024-11-27 04:51:46.405397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:39.211 [2024-11-27 04:51:46.405406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.211 [2024-11-27 04:51:46.405413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.486610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.486752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:39.470 [2024-11-27 04:51:46.486771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.486778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:39.470 [2024-11-27 04:51:46.549507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.549515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:39.470 [2024-11-27 04:51:46.549613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.549620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:39.470 [2024-11-27 04:51:46.549715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.549722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:39.470 [2024-11-27 04:51:46.549842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 
04:51:46.549849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:39.470 [2024-11-27 04:51:46.549915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.549922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.549962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.549971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:39.470 [2024-11-27 04:51:46.549992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.550001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.550047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.470 [2024-11-27 04:51:46.550057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:39.470 [2024-11-27 04:51:46.550080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.470 [2024-11-27 04:51:46.550087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.470 [2024-11-27 04:51:46.550239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.622 ms, result 0 00:29:39.470 true 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75208 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75208 ']' 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75208 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75208 00:29:39.470 killing process with pid 75208 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75208' 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75208 00:29:39.470 04:51:46 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75208 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:46.124 04:51:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:46.124 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:29:46.124 fio-3.35 00:29:46.124 Starting 1 thread 00:29:50.324 00:29:50.324 test: (groupid=0, jobs=1): err= 0: pid=75405: Wed Nov 27 04:51:56 2024 00:29:50.324 read: IOPS=1276, BW=84.8MiB/s (88.9MB/s)(255MiB/3002msec) 00:29:50.324 slat (nsec): min=3069, max=96792, avg=4511.82, stdev=2532.98 00:29:50.324 clat (usec): min=249, max=941, avg=352.26, stdev=77.54 00:29:50.324 lat (usec): min=253, max=946, avg=356.78, stdev=78.02 00:29:50.324 clat percentiles (usec): 00:29:50.324 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 310], 00:29:50.324 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 326], 00:29:50.324 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 474], 95.00th=[ 529], 00:29:50.324 | 99.00th=[ 611], 99.50th=[ 676], 99.90th=[ 857], 99.95th=[ 914], 00:29:50.324 | 99.99th=[ 938] 00:29:50.324 write: IOPS=1286, BW=85.4MiB/s (89.6MB/s)(256MiB/2998msec); 0 zone resets 00:29:50.324 slat (nsec): min=13665, max=56256, avg=18925.75, stdev=3405.52 00:29:50.324 clat (usec): min=256, max=1417, avg=391.56, stdev=116.07 00:29:50.324 lat (usec): min=274, max=1445, avg=410.49, stdev=116.51 00:29:50.324 clat percentiles (usec): 00:29:50.324 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 338], 00:29:50.324 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:29:50.324 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 553], 95.00th=[ 619], 00:29:50.324 | 99.00th=[ 914], 99.50th=[ 1057], 99.90th=[ 1270], 99.95th=[ 1401], 00:29:50.324 | 99.99th=[ 1418] 00:29:50.324 bw ( KiB/s): min=76568, max=93704, per=99.93%, avg=87402.67, stdev=8308.55, samples=6 00:29:50.324 iops : min= 1126, max= 1378, avg=1285.33, stdev=122.18, samples=6 00:29:50.324 lat (usec) : 250=0.01%, 500=89.28%, 750=9.73%, 
1000=0.59% 00:29:50.324 lat (msec) : 2=0.39% 00:29:50.324 cpu : usr=99.13%, sys=0.20%, ctx=15, majf=0, minf=1169 00:29:50.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.324 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:50.324 00:29:50.324 Run status group 0 (all jobs): 00:29:50.324 READ: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=255MiB (267MB), run=3002-3002msec 00:29:50.324 WRITE: bw=85.4MiB/s (89.6MB/s), 85.4MiB/s-85.4MiB/s (89.6MB/s-89.6MB/s), io=256MiB (269MB), run=2998-2998msec 00:29:51.259 ----------------------------------------------------- 00:29:51.259 Suppressions used: 00:29:51.259 count bytes template 00:29:51.259 1 5 /usr/src/fio/parse.c 00:29:51.259 1 8 libtcmalloc_minimal.so 00:29:51.259 1 904 libcrypto.so 00:29:51.259 ----------------------------------------------------- 00:29:51.259 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:51.259 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:51.260 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:29:51.260 04:51:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:51.260 04:51:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:51.521 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:29:51.521 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:29:51.521 fio-3.35 00:29:51.521 Starting 2 threads 00:30:18.072 00:30:18.072 first_half: (groupid=0, jobs=1): err= 0: pid=75505: Wed Nov 27 04:52:22 2024 00:30:18.072 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(255MiB/22837msec) 00:30:18.072 slat (nsec): min=3115, max=31228, avg=3898.58, stdev=768.88 00:30:18.072 clat (usec): min=679, max=331239, avg=35423.79, stdev=19030.09 00:30:18.072 lat (usec): min=683, max=331244, avg=35427.69, stdev=19030.12 00:30:18.072 clat percentiles (msec): 00:30:18.072 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:30:18.072 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:30:18.072 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 56], 00:30:18.072 | 99.00th=[ 142], 99.50th=[ 165], 99.90th=[ 203], 99.95th=[ 271], 00:30:18.072 | 99.99th=[ 321] 00:30:18.072 write: IOPS=3235, BW=12.6MiB/s (13.3MB/s)(256MiB/20255msec); 0 zone resets 00:30:18.072 slat (usec): min=3, max=929, avg= 5.75, stdev= 7.42 00:30:18.072 clat (usec): min=354, max=84304, avg=9284.49, stdev=14631.67 00:30:18.072 lat (usec): min=360, max=84310, avg=9290.24, stdev=14631.85 00:30:18.072 clat percentiles (usec): 00:30:18.072 | 1.00th=[ 725], 5.00th=[ 971], 10.00th=[ 1172], 20.00th=[ 1696], 00:30:18.072 | 30.00th=[ 3228], 40.00th=[ 4424], 50.00th=[ 5145], 60.00th=[ 5866], 00:30:18.072 | 70.00th=[ 7046], 80.00th=[11076], 90.00th=[16712], 95.00th=[33817], 00:30:18.072 | 99.00th=[72877], 99.50th=[73925], 99.90th=[79168], 99.95th=[81265], 00:30:18.072 | 99.99th=[83362] 00:30:18.072 bw ( KiB/s): min= 936, max=42208, per=96.44%, avg=24963.29, stdev=12302.86, samples=21 00:30:18.072 iops : min= 234, max=10552, avg=6240.81, stdev=3075.71, samples=21 00:30:18.072 lat (usec) : 500=0.03%, 750=0.64%, 1000=2.22% 00:30:18.072 lat (msec) : 2=8.11%, 4=7.53%, 10=20.58%, 20=7.77%, 50=47.71% 00:30:18.072 lat (msec) : 100=4.44%, 250=0.93%, 500=0.03% 00:30:18.072 cpu : usr=99.29%, sys=0.14%, ctx=41, majf=0, minf=5585 00:30:18.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:18.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.072 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.072 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.072 second_half: (groupid=0, jobs=1): err= 0: pid=75506: Wed Nov 27 04:52:22 2024 00:30:18.072 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(255MiB/22988msec) 00:30:18.072 slat (nsec): min=3128, max=51871, avg=4255.73, stdev=1035.38 00:30:18.072 clat (usec): min=663, max=334332, avg=35041.05, stdev=20600.71 00:30:18.072 lat (usec): min=667, max=334337, avg=35045.31, stdev=20600.65 00:30:18.072 clat percentiles (msec): 00:30:18.072 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:30:18.072 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:30:18.072 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 39], 
95.00th=[ 52], 00:30:18.072 | 99.00th=[ 150], 99.50th=[ 171], 99.90th=[ 213], 99.95th=[ 243], 00:30:18.072 | 99.99th=[ 330] 00:30:18.072 write: IOPS=3621, BW=14.1MiB/s (14.8MB/s)(256MiB/18095msec); 0 zone resets 00:30:18.072 slat (usec): min=3, max=1176, avg= 5.72, stdev= 6.03 00:30:18.072 clat (usec): min=344, max=85120, avg=9959.71, stdev=15463.40 00:30:18.072 lat (usec): min=355, max=85126, avg=9965.43, stdev=15463.56 00:30:18.072 clat percentiles (usec): 00:30:18.072 | 1.00th=[ 766], 5.00th=[ 996], 10.00th=[ 1156], 20.00th=[ 1418], 00:30:18.072 | 30.00th=[ 2933], 40.00th=[ 4424], 50.00th=[ 5145], 60.00th=[ 5866], 00:30:18.072 | 70.00th=[ 7111], 80.00th=[11731], 90.00th=[21365], 95.00th=[53216], 00:30:18.072 | 99.00th=[72877], 99.50th=[74974], 99.90th=[79168], 99.95th=[81265], 00:30:18.072 | 99.99th=[84411] 00:30:18.072 bw ( KiB/s): min= 680, max=60112, per=92.07%, avg=23831.27, stdev=16454.98, samples=22 00:30:18.072 iops : min= 170, max=15028, avg=5957.82, stdev=4113.75, samples=22 00:30:18.072 lat (usec) : 500=0.02%, 750=0.41%, 1000=2.17% 00:30:18.072 lat (msec) : 2=10.41%, 4=5.44%, 10=20.56%, 20=7.33%, 50=48.57% 00:30:18.072 lat (msec) : 100=3.86%, 250=1.20%, 500=0.02% 00:30:18.072 cpu : usr=99.39%, sys=0.15%, ctx=50, majf=0, minf=5532 00:30:18.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:18.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.072 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.072 issued rwts: total=65288,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.072 00:30:18.072 Run status group 0 (all jobs): 00:30:18.072 READ: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=510MiB (535MB), run=22837-22988msec 00:30:18.072 WRITE: bw=25.3MiB/s (26.5MB/s), 12.6MiB/s-14.1MiB/s (13.3MB/s-14.8MB/s), io=512MiB (537MB), run=18095-20255msec 00:30:18.072 ----------------------------------------------------- 00:30:18.072 Suppressions used: 00:30:18.072 count bytes template 00:30:18.072 2 10 /usr/src/fio/parse.c 00:30:18.072 4 384 /usr/src/fio/iolog.c 00:30:18.072 1 8 libtcmalloc_minimal.so 00:30:18.072 1 904 libcrypto.so 00:30:18.072 ----------------------------------------------------- 00:30:18.072 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
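(Editor's note: the xtrace lines around this point show how each fio job in this suite is launched — fio_bdev resolves the ASAN runtime that the SPDK fio plugin was linked against, preloads it together with the plugin, and hands fio the job file. A condensed sketch of that sequence, using the exact paths from this run:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8
LD_PRELOAD="$asan_lib $plugin" \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
)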
00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:30:18.072 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:18.073 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.073 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.073 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:30:18.073 04:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:18.073 04:52:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:18.073 04:52:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:18.073 04:52:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:30:18.073 04:52:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:18.073 04:52:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:18.073 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:18.073 fio-3.35 00:30:18.073 Starting 1 thread 00:30:36.179 00:30:36.179 test: (groupid=0, jobs=1): err= 0: pid=75809: Wed Nov 27 04:52:42 2024 00:30:36.179 read: IOPS=7405, BW=28.9MiB/s (30.3MB/s)(255MiB/8805msec) 00:30:36.179 slat (nsec): min=3093, max=33493, avg=4951.30, stdev=1110.79 00:30:36.179 clat (usec): min=555, max=32258, avg=17276.59, stdev=2820.36 00:30:36.179 lat (usec): min=571, max=32262, avg=17281.54, stdev=2820.40 00:30:36.179 clat percentiles (usec): 00:30:36.179 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14877], 20.00th=[15008], 00:30:36.179 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16188], 60.00th=[16909], 00:30:36.179 | 70.00th=[17957], 80.00th=[19530], 90.00th=[21365], 95.00th=[23200], 00:30:36.179 | 99.00th=[26084], 99.50th=[26870], 99.90th=[30016], 99.95th=[31589], 00:30:36.179 | 99.99th=[31851] 00:30:36.179 write: IOPS=9268, BW=36.2MiB/s (38.0MB/s)(256MiB/7071msec); 0 zone resets 00:30:36.179 slat (usec): min=4, max=347, avg= 7.28, stdev= 4.89 00:30:36.179 clat (usec): min=533, max=779162, avg=13739.99, stdev=36333.94 00:30:36.179 lat (usec): min=539, max=779168, avg=13747.27, stdev=36333.96 00:30:36.179 clat percentiles (usec): 00:30:36.179 | 1.00th=[ 930], 5.00th=[ 1237], 10.00th=[ 1467], 20.00th=[ 1827], 00:30:36.179 | 30.00th=[ 2278], 40.00th=[ 4752], 50.00th=[ 6521], 60.00th=[ 8455], 00:30:36.179 | 70.00th=[ 11338], 80.00th=[ 16450], 90.00th=[ 43254], 95.00th=[ 49546], 00:30:36.179 | 99.00th=[ 57934], 99.50th=[ 60031], 99.90th=[759170], 99.95th=[767558], 00:30:36.179 | 99.99th=[775947] 00:30:36.179 bw ( KiB/s): min= 4784, max=67784, per=94.25%, avg=34943.40, stdev=16800.85, samples=15 00:30:36.179 iops : min= 1196, max=16946, avg=8735.80, stdev=4200.22, samples=15 00:30:36.179 lat (usec) : 750=0.14%, 1000=0.70% 00:30:36.179 lat (msec) : 2=11.51%, 4=7.14%, 10=13.57%, 20=50.15%, 50=14.51% 00:30:36.179 lat (msec) : 100=2.19%, 750=0.03%, 1000=0.07% 00:30:36.179 
cpu : usr=99.02%, sys=0.23%, ctx=30, majf=0, minf=5565 00:30:36.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:36.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.179 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.179 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.179 00:30:36.179 Run status group 0 (all jobs): 00:30:36.179 READ: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=255MiB (267MB), run=8805-8805msec 00:30:36.179 WRITE: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=256MiB (268MB), run=7071-7071msec 00:30:36.438 ----------------------------------------------------- 00:30:36.438 Suppressions used: 00:30:36.438 count bytes template 00:30:36.438 1 5 /usr/src/fio/parse.c 00:30:36.438 2 192 /usr/src/fio/iolog.c 00:30:36.438 1 8 libtcmalloc_minimal.so 00:30:36.438 1 904 libcrypto.so 00:30:36.438 ----------------------------------------------------- 00:30:36.438 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:36.438 Remove shared memory files 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57141 /dev/shm/spdk_tgt_trace.pid74135 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:30:36.438 00:30:36.438 real 1m5.796s 00:30:36.438 user 2m20.218s 00:30:36.438 sys 0m2.910s 00:30:36.438 ************************************ 00:30:36.438 END TEST ftl_fio_basic 00:30:36.438 ************************************ 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.438 04:52:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:36.439 04:52:43 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:30:36.439 04:52:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.439 04:52:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.439 04:52:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:36.439 ************************************ 00:30:36.439 START TEST ftl_bdevperf 00:30:36.439 ************************************ 00:30:36.439 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:30:36.439 * Looking for test storage... 
00:30:36.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:36.439 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.439 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.439 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.698 --rc genhtml_branch_coverage=1 00:30:36.698 --rc genhtml_function_coverage=1 00:30:36.698 --rc genhtml_legend=1 00:30:36.698 --rc geninfo_all_blocks=1 00:30:36.698 --rc geninfo_unexecuted_blocks=1 00:30:36.698 00:30:36.698 ' 00:30:36.698 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.698 --rc genhtml_branch_coverage=1 00:30:36.698 
--rc genhtml_function_coverage=1 00:30:36.698 --rc genhtml_legend=1 00:30:36.698 --rc geninfo_all_blocks=1 00:30:36.698 --rc geninfo_unexecuted_blocks=1 00:30:36.698 00:30:36.699 ' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.699 --rc genhtml_branch_coverage=1 00:30:36.699 --rc genhtml_function_coverage=1 00:30:36.699 --rc genhtml_legend=1 00:30:36.699 --rc geninfo_all_blocks=1 00:30:36.699 --rc geninfo_unexecuted_blocks=1 00:30:36.699 00:30:36.699 ' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.699 --rc genhtml_branch_coverage=1 00:30:36.699 --rc genhtml_function_coverage=1 00:30:36.699 --rc genhtml_legend=1 00:30:36.699 --rc geninfo_all_blocks=1 00:30:36.699 --rc geninfo_unexecuted_blocks=1 00:30:36.699 00:30:36.699 ' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76069 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76069 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76069 ']' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.699 04:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:36.699 [2024-11-27 04:52:43.779732] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
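
bdevperf is launched here with -z, which brings the app up idle until a perform_tests RPC arrives, and -T ftl0, which targets the FTL bdev the script builds next; waitforlisten then blocks until the RPC socket answers. A minimal sketch of that launch/wait pattern, assuming the default /var/tmp/spdk.sock socket named in the trace (the polling loop is an approximation of waitforlisten, not its verbatim body):

    # Start bdevperf idle; it sits at the RPC server until perform_tests arrives.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the RPC socket until any call succeeds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version &>/dev/null; do
        sleep 0.1
    done
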
00:30:36.699 [2024-11-27 04:52:43.779848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 00:30:36.958 [2024-11-27 04:52:43.934807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.958 [2024-11-27 04:52:44.012566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.524 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:30:37.525 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:30:37.783 04:52:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:38.044 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:38.044 { 00:30:38.044 "name": "nvme0n1", 00:30:38.044 "aliases": [ 00:30:38.044 "a22af595-780c-4484-a8ee-4724ecb03a53" 00:30:38.044 ], 00:30:38.044 "product_name": "NVMe disk", 00:30:38.044 "block_size": 4096, 00:30:38.044 "num_blocks": 1310720, 00:30:38.044 "uuid": "a22af595-780c-4484-a8ee-4724ecb03a53", 00:30:38.044 "numa_id": -1, 00:30:38.044 "assigned_rate_limits": { 00:30:38.044 "rw_ios_per_sec": 0, 00:30:38.044 "rw_mbytes_per_sec": 0, 00:30:38.044 "r_mbytes_per_sec": 0, 00:30:38.044 "w_mbytes_per_sec": 0 00:30:38.044 }, 00:30:38.044 "claimed": true, 00:30:38.044 "claim_type": "read_many_write_one", 00:30:38.044 "zoned": false, 00:30:38.044 "supported_io_types": { 00:30:38.044 "read": true, 00:30:38.044 "write": true, 00:30:38.044 "unmap": true, 00:30:38.044 "flush": true, 00:30:38.044 "reset": true, 00:30:38.044 "nvme_admin": true, 00:30:38.044 "nvme_io": true, 00:30:38.044 "nvme_io_md": false, 00:30:38.044 "write_zeroes": true, 00:30:38.044 "zcopy": false, 00:30:38.044 "get_zone_info": false, 00:30:38.044 "zone_management": false, 00:30:38.044 "zone_append": false, 00:30:38.044 "compare": true, 00:30:38.044 "compare_and_write": false, 00:30:38.044 "abort": true, 00:30:38.044 "seek_hole": false, 00:30:38.044 "seek_data": false, 00:30:38.044 "copy": true, 00:30:38.044 "nvme_iov_md": false 00:30:38.044 }, 00:30:38.044 "driver_specific": { 00:30:38.044 
"nvme": [ 00:30:38.044 { 00:30:38.044 "pci_address": "0000:00:11.0", 00:30:38.044 "trid": { 00:30:38.044 "trtype": "PCIe", 00:30:38.044 "traddr": "0000:00:11.0" 00:30:38.044 }, 00:30:38.044 "ctrlr_data": { 00:30:38.044 "cntlid": 0, 00:30:38.045 "vendor_id": "0x1b36", 00:30:38.045 "model_number": "QEMU NVMe Ctrl", 00:30:38.045 "serial_number": "12341", 00:30:38.045 "firmware_revision": "8.0.0", 00:30:38.045 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:38.045 "oacs": { 00:30:38.045 "security": 0, 00:30:38.045 "format": 1, 00:30:38.045 "firmware": 0, 00:30:38.045 "ns_manage": 1 00:30:38.045 }, 00:30:38.045 "multi_ctrlr": false, 00:30:38.045 "ana_reporting": false 00:30:38.045 }, 00:30:38.045 "vs": { 00:30:38.045 "nvme_version": "1.4" 00:30:38.045 }, 00:30:38.045 "ns_data": { 00:30:38.045 "id": 1, 00:30:38.045 "can_share": false 00:30:38.045 } 00:30:38.045 } 00:30:38.045 ], 00:30:38.045 "mp_policy": "active_passive" 00:30:38.045 } 00:30:38.045 } 00:30:38.045 ]' 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:38.045 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:38.305 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e2a2ea63-b552-4617-a061-20a80bb3ab9d 00:30:38.305 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:30:38.305 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2a2ea63-b552-4617-a061-20a80bb3ab9d 00:30:38.564 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:38.824 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b69b9a88-5364-42b9-bf4b-d63e0568c16b 00:30:38.824 04:52:45 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b69b9a88-5364-42b9-bf4b-d63e0568c16b 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:39.085 { 00:30:39.085 "name": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:39.085 "aliases": [ 00:30:39.085 "lvs/nvme0n1p0" 00:30:39.085 ], 00:30:39.085 "product_name": "Logical Volume", 00:30:39.085 "block_size": 4096, 00:30:39.085 "num_blocks": 26476544, 00:30:39.085 "uuid": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:39.085 "assigned_rate_limits": { 00:30:39.085 "rw_ios_per_sec": 0, 00:30:39.085 "rw_mbytes_per_sec": 0, 00:30:39.085 "r_mbytes_per_sec": 0, 00:30:39.085 "w_mbytes_per_sec": 0 00:30:39.085 }, 00:30:39.085 "claimed": false, 00:30:39.085 "zoned": false, 00:30:39.085 "supported_io_types": { 00:30:39.085 "read": true, 00:30:39.085 "write": true, 00:30:39.085 "unmap": true, 00:30:39.085 "flush": false, 00:30:39.085 "reset": true, 00:30:39.085 "nvme_admin": false, 00:30:39.085 "nvme_io": false, 00:30:39.085 "nvme_io_md": false, 00:30:39.085 "write_zeroes": true, 00:30:39.085 "zcopy": false, 00:30:39.085 "get_zone_info": false, 00:30:39.085 "zone_management": false, 00:30:39.085 "zone_append": false, 00:30:39.085 "compare": false, 00:30:39.085 "compare_and_write": false, 00:30:39.085 "abort": false, 00:30:39.085 "seek_hole": true, 00:30:39.085 "seek_data": true, 00:30:39.085 "copy": false, 00:30:39.085 "nvme_iov_md": false 00:30:39.085 }, 00:30:39.085 "driver_specific": { 00:30:39.085 "lvol": { 00:30:39.085 "lvol_store_uuid": "b69b9a88-5364-42b9-bf4b-d63e0568c16b", 00:30:39.085 "base_bdev": "nvme0n1", 00:30:39.085 "thin_provision": true, 00:30:39.085 "num_allocated_clusters": 0, 00:30:39.085 "snapshot": false, 00:30:39.085 "clone": false, 00:30:39.085 "esnap_clone": false 00:30:39.085 } 00:30:39.085 } 00:30:39.085 } 00:30:39.085 ]' 00:30:39.085 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:30:39.345 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:39.605 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:39.605 { 00:30:39.605 "name": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:39.605 "aliases": [ 00:30:39.605 "lvs/nvme0n1p0" 00:30:39.605 ], 00:30:39.605 "product_name": "Logical Volume", 00:30:39.605 "block_size": 4096, 00:30:39.605 "num_blocks": 26476544, 00:30:39.605 "uuid": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:39.605 "assigned_rate_limits": { 00:30:39.605 "rw_ios_per_sec": 0, 00:30:39.605 "rw_mbytes_per_sec": 0, 00:30:39.605 "r_mbytes_per_sec": 0, 00:30:39.605 "w_mbytes_per_sec": 0 00:30:39.605 }, 00:30:39.605 "claimed": false, 00:30:39.605 "zoned": false, 00:30:39.605 "supported_io_types": { 00:30:39.605 "read": true, 00:30:39.605 "write": true, 00:30:39.605 "unmap": true, 00:30:39.605 "flush": false, 00:30:39.605 "reset": true, 00:30:39.605 "nvme_admin": false, 00:30:39.605 "nvme_io": false, 00:30:39.605 "nvme_io_md": false, 00:30:39.605 "write_zeroes": true, 00:30:39.605 "zcopy": false, 00:30:39.605 "get_zone_info": false, 00:30:39.605 "zone_management": false, 00:30:39.605 "zone_append": false, 00:30:39.605 "compare": false, 00:30:39.605 "compare_and_write": false, 00:30:39.605 "abort": false, 00:30:39.605 "seek_hole": true, 00:30:39.605 "seek_data": true, 00:30:39.605 "copy": false, 00:30:39.605 "nvme_iov_md": false 00:30:39.605 }, 00:30:39.605 "driver_specific": { 00:30:39.605 "lvol": { 00:30:39.605 "lvol_store_uuid": "b69b9a88-5364-42b9-bf4b-d63e0568c16b", 00:30:39.605 "base_bdev": "nvme0n1", 00:30:39.605 "thin_provision": true, 00:30:39.606 "num_allocated_clusters": 0, 00:30:39.606 "snapshot": false, 00:30:39.606 "clone": false, 00:30:39.606 "esnap_clone": false 00:30:39.606 } 00:30:39.606 } 00:30:39.606 } 00:30:39.606 ]' 00:30:39.606 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:30:39.866 04:52:46 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 00:30:40.127 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:40.127 { 00:30:40.127 "name": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:40.127 "aliases": [ 00:30:40.127 "lvs/nvme0n1p0" 00:30:40.127 ], 00:30:40.127 "product_name": "Logical Volume", 00:30:40.127 "block_size": 4096, 00:30:40.127 "num_blocks": 26476544, 00:30:40.127 "uuid": "9da6ff07-7e96-4fa8-88a2-7a8d6fffc527", 00:30:40.127 "assigned_rate_limits": { 00:30:40.127 "rw_ios_per_sec": 0, 00:30:40.127 "rw_mbytes_per_sec": 0, 00:30:40.127 "r_mbytes_per_sec": 0, 00:30:40.127 "w_mbytes_per_sec": 0 00:30:40.127 }, 00:30:40.127 "claimed": false, 00:30:40.127 "zoned": false, 00:30:40.127 "supported_io_types": { 00:30:40.127 "read": true, 00:30:40.127 "write": true, 00:30:40.127 "unmap": true, 00:30:40.127 "flush": false, 00:30:40.127 "reset": true, 00:30:40.127 "nvme_admin": false, 00:30:40.127 "nvme_io": false, 00:30:40.128 "nvme_io_md": false, 00:30:40.128 "write_zeroes": true, 00:30:40.128 "zcopy": false, 00:30:40.128 "get_zone_info": false, 00:30:40.128 "zone_management": false, 00:30:40.128 "zone_append": false, 00:30:40.128 "compare": false, 00:30:40.128 "compare_and_write": false, 00:30:40.128 "abort": false, 00:30:40.128 "seek_hole": true, 00:30:40.128 "seek_data": true, 00:30:40.128 "copy": false, 00:30:40.128 "nvme_iov_md": false 00:30:40.128 }, 00:30:40.128 "driver_specific": { 00:30:40.128 "lvol": { 00:30:40.128 "lvol_store_uuid": "b69b9a88-5364-42b9-bf4b-d63e0568c16b", 00:30:40.128 "base_bdev": "nvme0n1", 00:30:40.128 "thin_provision": true, 00:30:40.128 "num_allocated_clusters": 0, 00:30:40.128 "snapshot": false, 00:30:40.128 "clone": false, 00:30:40.128 "esnap_clone": false 00:30:40.128 } 00:30:40.128 } 00:30:40.128 } 00:30:40.128 ]' 00:30:40.128 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:30:40.389 04:52:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9da6ff07-7e96-4fa8-88a2-7a8d6fffc527 -c nvc0n1p0 --l2p_dram_limit 20 00:30:40.389 [2024-11-27 04:52:47.578541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.389 [2024-11-27 04:52:47.578625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:40.389 [2024-11-27 04:52:47.578641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:40.389 [2024-11-27 04:52:47.578653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.389 [2024-11-27 04:52:47.578728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.389 [2024-11-27 04:52:47.578742] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:40.389 [2024-11-27 04:52:47.578751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:40.389 [2024-11-27 04:52:47.578762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.389 [2024-11-27 04:52:47.578785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:40.389 [2024-11-27 04:52:47.579674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:40.389 [2024-11-27 04:52:47.579714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.389 [2024-11-27 04:52:47.579726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:40.389 [2024-11-27 04:52:47.579736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:30:40.389 [2024-11-27 04:52:47.579746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.389 [2024-11-27 04:52:47.579784] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 32aa0dcd-9051-45a1-aa4b-1d7d89be2413 00:30:40.389 [2024-11-27 04:52:47.581627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.389 [2024-11-27 04:52:47.581678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:40.389 [2024-11-27 04:52:47.581698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:40.389 [2024-11-27 04:52:47.581706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.650 [2024-11-27 04:52:47.590953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.650 [2024-11-27 04:52:47.591002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:40.650 [2024-11-27 04:52:47.591016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.162 ms 00:30:40.650 [2024-11-27 04:52:47.591027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.591154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.591167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:40.651 [2024-11-27 04:52:47.591183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:40.651 [2024-11-27 04:52:47.591191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.591256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.591267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:40.651 [2024-11-27 04:52:47.591279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:40.651 [2024-11-27 04:52:47.591287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.591313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:40.651 [2024-11-27 04:52:47.595683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.595731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:40.651 [2024-11-27 04:52:47.595742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:30:40.651 [2024-11-27 04:52:47.595758] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.595798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.595810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:40.651 [2024-11-27 04:52:47.595819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:40.651 [2024-11-27 04:52:47.595830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.595867] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:40.651 [2024-11-27 04:52:47.596018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:40.651 [2024-11-27 04:52:47.596032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:40.651 [2024-11-27 04:52:47.596046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:40.651 [2024-11-27 04:52:47.596057] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596085] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596097] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:40.651 [2024-11-27 04:52:47.596107] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:40.651 [2024-11-27 04:52:47.596117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:40.651 [2024-11-27 04:52:47.596127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:40.651 [2024-11-27 04:52:47.596139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.596149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:40.651 [2024-11-27 04:52:47.596157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:40.651 [2024-11-27 04:52:47.596168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.596250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.651 [2024-11-27 04:52:47.596262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:40.651 [2024-11-27 04:52:47.596272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:40.651 [2024-11-27 04:52:47.596284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.651 [2024-11-27 04:52:47.596378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:40.651 [2024-11-27 04:52:47.596403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:40.651 [2024-11-27 04:52:47.596411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:40.651 [2024-11-27 04:52:47.596440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:40.651 
[2024-11-27 04:52:47.596457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:40.651 [2024-11-27 04:52:47.596465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:40.651 [2024-11-27 04:52:47.596482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:40.651 [2024-11-27 04:52:47.596499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:40.651 [2024-11-27 04:52:47.596505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:40.651 [2024-11-27 04:52:47.596515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:40.651 [2024-11-27 04:52:47.596522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:40.651 [2024-11-27 04:52:47.596533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:40.651 [2024-11-27 04:52:47.596550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:40.651 [2024-11-27 04:52:47.596574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:40.651 [2024-11-27 04:52:47.596598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:40.651 [2024-11-27 04:52:47.596621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:40.651 [2024-11-27 04:52:47.596650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:40.651 [2024-11-27 04:52:47.596681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:40.651 [2024-11-27 04:52:47.596702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:40.651 [2024-11-27 04:52:47.596710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:40.651 [2024-11-27 04:52:47.596719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:40.651 [2024-11-27 04:52:47.596728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:40.651 [2024-11-27 04:52:47.596736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:30:40.651 [2024-11-27 04:52:47.596745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:40.651 [2024-11-27 04:52:47.596762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:40.651 [2024-11-27 04:52:47.596770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:40.651 [2024-11-27 04:52:47.596788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:40.651 [2024-11-27 04:52:47.596797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.651 [2024-11-27 04:52:47.596818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:40.651 [2024-11-27 04:52:47.596825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:40.651 [2024-11-27 04:52:47.596834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:40.651 [2024-11-27 04:52:47.596840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:40.651 [2024-11-27 04:52:47.596849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:40.651 [2024-11-27 04:52:47.596856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:40.651 [2024-11-27 04:52:47.596869] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:40.651 [2024-11-27 04:52:47.596880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.651 [2024-11-27 04:52:47.596892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:40.651 [2024-11-27 04:52:47.596901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:40.651 [2024-11-27 04:52:47.596912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:40.651 [2024-11-27 04:52:47.596920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:40.651 [2024-11-27 04:52:47.596931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:40.651 [2024-11-27 04:52:47.596938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:40.651 [2024-11-27 04:52:47.596950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:40.651 [2024-11-27 04:52:47.596962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:40.652 [2024-11-27 04:52:47.596975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:40.652 [2024-11-27 04:52:47.596984] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.596995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.597003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.597014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.597023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:40.652 [2024-11-27 04:52:47.597033] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:40.652 [2024-11-27 04:52:47.597043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.597057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:40.652 [2024-11-27 04:52:47.597078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:40.652 [2024-11-27 04:52:47.597089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:40.652 [2024-11-27 04:52:47.597099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:40.652 [2024-11-27 04:52:47.597110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.652 [2024-11-27 04:52:47.597118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:40.652 [2024-11-27 04:52:47.597129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:30:40.652 [2024-11-27 04:52:47.597138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.652 [2024-11-27 04:52:47.597179] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
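
The layout dump above can be cross-checked against itself: the instance maps 20971520 logical blocks with 4-byte L2P addresses, which is exactly the 80.00 MiB "Region l2p" reported, while --l2p_dram_limit 20 caps how much of that table stays resident in DRAM (the l2p cache later reports "19 (of 20) MiB"). A one-line check using only numbers from the log:

    # 20971520 L2P entries x 4 bytes per address, in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80

The scrub that follows wipes the 5171 MiB NV cache region in 5 chunks before metadata initialization continues.
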
00:30:40.652 [2024-11-27 04:52:47.597198] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:43.946 [2024-11-27 04:52:50.892107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.892190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:43.946 [2024-11-27 04:52:50.892209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3294.906 ms 00:30:43.946 [2024-11-27 04:52:50.892219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.920518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.920566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:43.946 [2024-11-27 04:52:50.920583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.085 ms 00:30:43.946 [2024-11-27 04:52:50.920591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.920715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.920726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:43.946 [2024-11-27 04:52:50.920740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:43.946 [2024-11-27 04:52:50.920748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.966698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.966745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:43.946 [2024-11-27 04:52:50.966762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.914 ms 00:30:43.946 [2024-11-27 04:52:50.966771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.966811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.966820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:43.946 [2024-11-27 04:52:50.966831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:43.946 [2024-11-27 04:52:50.966841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.967304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.967329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:43.946 [2024-11-27 04:52:50.967340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:30:43.946 [2024-11-27 04:52:50.967349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.967460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.967470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:43.946 [2024-11-27 04:52:50.967483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:43.946 [2024-11-27 04:52:50.967490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.981791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:50.981822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:43.946 [2024-11-27 
04:52:50.981835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.280 ms 00:30:43.946 [2024-11-27 04:52:50.981850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:50.994103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:30:43.946 [2024-11-27 04:52:51.000293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.000327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:43.946 [2024-11-27 04:52:51.000338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.357 ms 00:30:43.946 [2024-11-27 04:52:51.000349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:51.073651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.073694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:43.946 [2024-11-27 04:52:51.073706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.277 ms 00:30:43.946 [2024-11-27 04:52:51.073717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:51.073901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.073918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:43.946 [2024-11-27 04:52:51.073926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:30:43.946 [2024-11-27 04:52:51.073939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:51.097742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.097777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:43.946 [2024-11-27 04:52:51.097789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.762 ms 00:30:43.946 [2024-11-27 04:52:51.097799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:51.120448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.120480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:43.946 [2024-11-27 04:52:51.120491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.618 ms 00:30:43.946 [2024-11-27 04:52:51.120501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.946 [2024-11-27 04:52:51.121079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.946 [2024-11-27 04:52:51.121102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:43.946 [2024-11-27 04:52:51.121112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:30:43.946 [2024-11-27 04:52:51.121122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.200578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.200627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:44.208 [2024-11-27 04:52:51.200641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.427 ms 00:30:44.208 [2024-11-27 04:52:51.200652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 
04:52:51.225684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.225721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:44.208 [2024-11-27 04:52:51.225735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.963 ms 00:30:44.208 [2024-11-27 04:52:51.225744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.249530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.249567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:44.208 [2024-11-27 04:52:51.249578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.753 ms 00:30:44.208 [2024-11-27 04:52:51.249588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.272942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.272982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:44.208 [2024-11-27 04:52:51.272994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.322 ms 00:30:44.208 [2024-11-27 04:52:51.273004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.273041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.273056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:44.208 [2024-11-27 04:52:51.273073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:44.208 [2024-11-27 04:52:51.273083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.273168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.208 [2024-11-27 04:52:51.273181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:44.208 [2024-11-27 04:52:51.273190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:44.208 [2024-11-27 04:52:51.273200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.208 [2024-11-27 04:52:51.274260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3695.261 ms, result 0 00:30:44.208 { 00:30:44.208 "name": "ftl0", 00:30:44.208 "uuid": "32aa0dcd-9051-45a1-aa4b-1d7d89be2413" 00:30:44.208 } 00:30:44.208 04:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:30:44.208 04:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:30:44.208 04:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:30:44.484 04:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:30:44.484 [2024-11-27 04:52:51.586413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:30:44.484 I/O size of 69632 is greater than zero copy threshold (65536). 00:30:44.484 Zero copy mechanism will not be used. 00:30:44.484 Running I/O for 4 seconds... 
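The -o 69632 in the perform_tests invocation above is 17 blocks of 4096 bytes (68 KiB), just over bdevperf's 65536-byte zero-copy threshold, which is why the run reports that the zero copy mechanism will not be used. A minimal sketch of replaying this first pass by hand (not part of the captured run; assumes a bdevperf process is already up with the same ftl0 bdev attached and the default RPC socket, paths as in this workspace):
# Hypothetical manual replay of the q=1 randwrite pass launched above.
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py   # shorthand, not from the log
# -q 1: one outstanding I/O; -w randwrite: random writes; -t 4: run for 4 s;
# -o 69632: 17 x 4096-byte blocks per I/O (> 65536, so zero copy is skipped)
"$BDEVPERF_PY" perform_tests -q 1 -w randwrite -t 4 -o 69632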
00:30:46.804 1324.00 IOPS, 87.92 MiB/s [2024-11-27T04:52:54.946Z] 1333.00 IOPS, 88.52 MiB/s [2024-11-27T04:52:55.889Z] 1209.67 IOPS, 80.33 MiB/s [2024-11-27T04:52:55.889Z] 1096.25 IOPS, 72.80 MiB/s 00:30:48.686 Latency(us) 00:30:48.686 [2024-11-27T04:52:55.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.686 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:30:48.686 ftl0 : 4.00 1095.96 72.78 0.00 0.00 961.84 209.53 3213.78 00:30:48.686 [2024-11-27T04:52:55.889Z] =================================================================================================================== 00:30:48.686 [2024-11-27T04:52:55.889Z] Total : 1095.96 72.78 0.00 0.00 961.84 209.53 3213.78 00:30:48.686 [2024-11-27 04:52:55.597051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:30:48.686 { 00:30:48.686 "results": [ 00:30:48.686 { 00:30:48.686 "job": "ftl0", 00:30:48.686 "core_mask": "0x1", 00:30:48.686 "workload": "randwrite", 00:30:48.686 "status": "finished", 00:30:48.686 "queue_depth": 1, 00:30:48.686 "io_size": 69632, 00:30:48.686 "runtime": 4.001961, 00:30:48.686 "iops": 1095.9627042842246, 00:30:48.686 "mibps": 72.7787733313743, 00:30:48.686 "io_failed": 0, 00:30:48.686 "io_timeout": 0, 00:30:48.686 "avg_latency_us": 961.8409119927042, 00:30:48.686 "min_latency_us": 209.52615384615385, 00:30:48.686 "max_latency_us": 3213.7846153846153 00:30:48.686 } 00:30:48.686 ], 00:30:48.686 "core_count": 1 00:30:48.686 } 00:30:48.686 04:52:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:30:48.686 [2024-11-27 04:52:55.722232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:30:48.686 Running I/O for 4 seconds... 
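Each pass finishes by printing a results JSON like the q=1 blob above. A hedged post-processing sketch, assuming that blob was captured to a hypothetical results.json; the field names are copied verbatim from the printed output:
# Pull the headline numbers out of the captured results JSON.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
# Cross-check of the reported throughput: MiB/s = IOPS * io_size / 1048576,
# e.g. 1095.96 * 69632 / 1048576 = 72.78 MiB/s, matching "mibps" above.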
00:30:50.564 5211.00 IOPS, 20.36 MiB/s [2024-11-27T04:52:59.148Z] 5356.50 IOPS, 20.92 MiB/s [2024-11-27T04:53:00.084Z] 5523.00 IOPS, 21.57 MiB/s [2024-11-27T04:53:00.084Z] 5541.75 IOPS, 21.65 MiB/s 00:30:52.881 Latency(us) 00:30:52.881 [2024-11-27T04:53:00.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.881 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.881 ftl0 : 4.03 5532.02 21.61 0.00 0.00 23053.21 340.28 59284.87 00:30:52.881 [2024-11-27T04:53:00.084Z] =================================================================================================================== 00:30:52.881 [2024-11-27T04:53:00.084Z] Total : 5532.02 21.61 0.00 0.00 23053.21 0.00 59284.87 00:30:52.881 [2024-11-27 04:52:59.762803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:30:52.881 { 00:30:52.881 "results": [ 00:30:52.881 { 00:30:52.881 "job": "ftl0", 00:30:52.881 "core_mask": "0x1", 00:30:52.881 "workload": "randwrite", 00:30:52.881 "status": "finished", 00:30:52.881 "queue_depth": 128, 00:30:52.881 "io_size": 4096, 00:30:52.881 "runtime": 4.030176, 00:30:52.881 "iops": 5532.016467767165, 00:30:52.881 "mibps": 21.609439327215487, 00:30:52.881 "io_failed": 0, 00:30:52.881 "io_timeout": 0, 00:30:52.881 "avg_latency_us": 23053.205990994877, 00:30:52.881 "min_latency_us": 340.2830769230769, 00:30:52.881 "max_latency_us": 59284.873846153845 00:30:52.881 } 00:30:52.881 ], 00:30:52.881 "core_count": 1 00:30:52.881 } 00:30:52.881 04:52:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:30:52.881 [2024-11-27 04:52:59.864649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:30:52.881 Running I/O for 4 seconds... 
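Going from -q 1 to -q 128 lifts the average latency from roughly 0.96 ms to roughly 23 ms while IOPS rises about five-fold, which is what Little's law predicts when a fixed queue depth is held against a device with bounded throughput: in-flight I/O = IOPS x mean latency. An illustrative check on the q=128 randwrite numbers above (my own arithmetic, not part of the run):
# Little's-law sanity check: 5532.02 IOPS * 23.053 ms ~= 128 in-flight I/Os,
# i.e. the requested queue depth was sustained across the 4-second run.
awk 'BEGIN { printf "in-flight ~= %.1f\n", 5532.016467767165 * 23053.205990994877 / 1e6 }'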
00:30:54.759 5228.00 IOPS, 20.42 MiB/s [2024-11-27T04:53:02.901Z] 5110.00 IOPS, 19.96 MiB/s [2024-11-27T04:53:04.290Z] 5071.33 IOPS, 19.81 MiB/s [2024-11-27T04:53:04.290Z] 4998.00 IOPS, 19.52 MiB/s 00:30:57.087 Latency(us) 00:30:57.087 [2024-11-27T04:53:04.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.087 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:57.087 Verification LBA range: start 0x0 length 0x1400000 00:30:57.087 ftl0 : 4.02 5004.72 19.55 0.00 0.00 25490.67 318.23 37708.41 00:30:57.087 [2024-11-27T04:53:04.290Z] =================================================================================================================== 00:30:57.087 [2024-11-27T04:53:04.290Z] Total : 5004.72 19.55 0.00 0.00 25490.67 0.00 37708.41 00:30:57.087 [2024-11-27 04:53:03.897976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:30:57.087 { 00:30:57.087 "results": [ 00:30:57.087 { 00:30:57.087 "job": "ftl0", 00:30:57.087 "core_mask": "0x1", 00:30:57.087 "workload": "verify", 00:30:57.087 "status": "finished", 00:30:57.087 "verify_range": { 00:30:57.087 "start": 0, 00:30:57.087 "length": 20971520 00:30:57.087 }, 00:30:57.087 "queue_depth": 128, 00:30:57.087 "io_size": 4096, 00:30:57.087 "runtime": 4.018005, 00:30:57.087 "iops": 5004.722492878928, 00:30:57.087 "mibps": 19.549697237808314, 00:30:57.087 "io_failed": 0, 00:30:57.087 "io_timeout": 0, 00:30:57.087 "avg_latency_us": 25490.667532103882, 00:30:57.087 "min_latency_us": 318.2276923076923, 00:30:57.087 "max_latency_us": 37708.406153846154 00:30:57.087 } 00:30:57.087 ], 00:30:57.087 "core_count": 1 00:30:57.087 } 00:30:57.087 04:53:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:30:57.087 [2024-11-27 04:53:04.113205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.087 [2024-11-27 04:53:04.113279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.087 [2024-11-27 04:53:04.113295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.087 [2024-11-27 04:53:04.113308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.087 [2024-11-27 04:53:04.113332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.087 [2024-11-27 04:53:04.116364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.087 [2024-11-27 04:53:04.116419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.087 [2024-11-27 04:53:04.116434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:30:57.087 [2024-11-27 04:53:04.116443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.087 [2024-11-27 04:53:04.119935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.087 [2024-11-27 04:53:04.119982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.088 [2024-11-27 04:53:04.119998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:30:57.088 [2024-11-27 04:53:04.120007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.350 [2024-11-27 04:53:04.331397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.350 [2024-11-27 04:53:04.331468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:30:57.350 [2024-11-27 04:53:04.331490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.356 ms 00:30:57.350 [2024-11-27 04:53:04.331499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.350 [2024-11-27 04:53:04.337740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.350 [2024-11-27 04:53:04.337779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:57.350 [2024-11-27 04:53:04.337795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.195 ms 00:30:57.350 [2024-11-27 04:53:04.337807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.350 [2024-11-27 04:53:04.363869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.363917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:57.351 [2024-11-27 04:53:04.363934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.992 ms 00:30:57.351 [2024-11-27 04:53:04.363943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.381397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.381449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:57.351 [2024-11-27 04:53:04.381465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.401 ms 00:30:57.351 [2024-11-27 04:53:04.381475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.381639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.381651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:57.351 [2024-11-27 04:53:04.381666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:30:57.351 [2024-11-27 04:53:04.381675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.407164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.407213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:57.351 [2024-11-27 04:53:04.407229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.468 ms 00:30:57.351 [2024-11-27 04:53:04.407237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.431787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.431850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:57.351 [2024-11-27 04:53:04.431866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.499 ms 00:30:57.351 [2024-11-27 04:53:04.431875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.456301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 04:53:04.456344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:57.351 [2024-11-27 04:53:04.456359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.378 ms 00:30:57.351 [2024-11-27 04:53:04.456367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.480766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.351 [2024-11-27 
04:53:04.480810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:57.351 [2024-11-27 04:53:04.480828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.309 ms 00:30:57.351 [2024-11-27 04:53:04.480835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.351 [2024-11-27 04:53:04.482490] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.351 [2024-11-27 04:53:04.482531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.482994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.351 [2024-11-27 04:53:04.483152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483207] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483434] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.352 [2024-11-27 04:53:04.483480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.352 [2024-11-27 04:53:04.483491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32aa0dcd-9051-45a1-aa4b-1d7d89be2413 00:30:57.352 [2024-11-27 04:53:04.483503] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.352 [2024-11-27 04:53:04.483512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:57.352 [2024-11-27 04:53:04.483520] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.352 [2024-11-27 04:53:04.483531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.352 [2024-11-27 04:53:04.483538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:57.352 [2024-11-27 04:53:04.483548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.352 [2024-11-27 04:53:04.483556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.352 [2024-11-27 04:53:04.483566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.352 [2024-11-27 04:53:04.483573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.352 [2024-11-27 04:53:04.483583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.352 [2024-11-27 04:53:04.483591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.352 [2024-11-27 04:53:04.483602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:30:57.352 [2024-11-27 04:53:04.483609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.497680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.352 [2024-11-27 04:53:04.497721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.352 [2024-11-27 04:53:04.497734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:30:57.352 [2024-11-27 04:53:04.497742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.498150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.352 [2024-11-27 04:53:04.498160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.352 [2024-11-27 04:53:04.498171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:30:57.352 [2024-11-27 04:53:04.498180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.536815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.352 [2024-11-27 04:53:04.536872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.352 [2024-11-27 04:53:04.536890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.352 [2024-11-27 04:53:04.536899] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.536983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.352 [2024-11-27 04:53:04.536993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.352 [2024-11-27 04:53:04.537004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.352 [2024-11-27 04:53:04.537013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.537151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.352 [2024-11-27 04:53:04.537163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.352 [2024-11-27 04:53:04.537174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.352 [2024-11-27 04:53:04.537182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.352 [2024-11-27 04:53:04.537202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.352 [2024-11-27 04:53:04.537211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.352 [2024-11-27 04:53:04.537222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.352 [2024-11-27 04:53:04.537229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.623169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.623459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.614 [2024-11-27 04:53:04.623493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.623501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.614 [2024-11-27 04:53:04.693203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.614 [2024-11-27 04:53:04.693329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.614 [2024-11-27 04:53:04.693439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.614 [2024-11-27 04:53:04.693578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:30:57.614 [2024-11-27 04:53:04.693586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.614 [2024-11-27 04:53:04.693645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.614 [2024-11-27 04:53:04.693716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.614 [2024-11-27 04:53:04.693791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.614 [2024-11-27 04:53:04.693801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.614 [2024-11-27 04:53:04.693809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.614 [2024-11-27 04:53:04.693956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 580.707 ms, result 0 00:30:57.614 true 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76069 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76069 ']' 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76069 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76069 00:30:57.614 killing process with pid 76069 00:30:57.614 Received shutdown signal, test time was about 4.000000 seconds 00:30:57.614 00:30:57.614 Latency(us) 00:30:57.614 [2024-11-27T04:53:04.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.614 [2024-11-27T04:53:04.817Z] =================================================================================================================== 00:30:57.614 [2024-11-27T04:53:04.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76069' 00:30:57.614 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76069 00:30:57.615 04:53:04 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76069 00:31:02.913 Remove shared memory files 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:02.913 04:53:09 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:31:02.913 ************************************ 00:31:02.913 END TEST ftl_bdevperf 00:31:02.913 ************************************ 00:31:02.913 00:31:02.913 real 0m26.433s 00:31:02.913 user 0m29.022s 00:31:02.913 sys 0m0.978s 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.913 04:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.913 04:53:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:31:02.913 04:53:10 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:02.913 04:53:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.913 04:53:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:02.913 ************************************ 00:31:02.913 START TEST ftl_trim 00:31:02.913 ************************************ 00:31:02.913 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:31:03.174 * Looking for test storage... 00:31:03.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.174 04:53:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:03.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.174 --rc genhtml_branch_coverage=1 00:31:03.174 --rc genhtml_function_coverage=1 00:31:03.174 --rc genhtml_legend=1 00:31:03.174 --rc geninfo_all_blocks=1 00:31:03.174 --rc geninfo_unexecuted_blocks=1 00:31:03.174 00:31:03.174 ' 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:03.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.174 --rc genhtml_branch_coverage=1 00:31:03.174 --rc genhtml_function_coverage=1 00:31:03.174 --rc genhtml_legend=1 00:31:03.174 --rc geninfo_all_blocks=1 00:31:03.174 --rc geninfo_unexecuted_blocks=1 00:31:03.174 00:31:03.174 ' 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:03.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.174 --rc genhtml_branch_coverage=1 00:31:03.174 --rc genhtml_function_coverage=1 00:31:03.174 --rc genhtml_legend=1 00:31:03.174 --rc geninfo_all_blocks=1 00:31:03.174 --rc geninfo_unexecuted_blocks=1 00:31:03.174 00:31:03.174 ' 00:31:03.174 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:03.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.174 --rc genhtml_branch_coverage=1 00:31:03.174 --rc genhtml_function_coverage=1 00:31:03.174 --rc genhtml_legend=1 00:31:03.174 --rc geninfo_all_blocks=1 00:31:03.174 --rc geninfo_unexecuted_blocks=1 00:31:03.174 00:31:03.174 ' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:31:03.174 04:53:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:03.175 04:53:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76416 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:31:03.175 04:53:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76416 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76416 ']' 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.175 04:53:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:31:03.175 [2024-11-27 04:53:10.298308] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:31:03.175 [2024-11-27 04:53:10.298597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76416 ] 00:31:03.435 [2024-11-27 04:53:10.453231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:03.435 [2024-11-27 04:53:10.555504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.435 [2024-11-27 04:53:10.555772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.435 [2024-11-27 04:53:10.555851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.006 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.006 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:31:04.006 04:53:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:04.267 04:53:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:04.267 04:53:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:31:04.267 04:53:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:04.267 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:04.267 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.267 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:04.267 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:04.267 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.528 { 00:31:04.528 "name": "nvme0n1", 00:31:04.528 "aliases": [ 
00:31:04.528 "f34b6dfc-cd87-436a-b44b-a844022bf34a" 00:31:04.528 ], 00:31:04.528 "product_name": "NVMe disk", 00:31:04.528 "block_size": 4096, 00:31:04.528 "num_blocks": 1310720, 00:31:04.528 "uuid": "f34b6dfc-cd87-436a-b44b-a844022bf34a", 00:31:04.528 "numa_id": -1, 00:31:04.528 "assigned_rate_limits": { 00:31:04.528 "rw_ios_per_sec": 0, 00:31:04.528 "rw_mbytes_per_sec": 0, 00:31:04.528 "r_mbytes_per_sec": 0, 00:31:04.528 "w_mbytes_per_sec": 0 00:31:04.528 }, 00:31:04.528 "claimed": true, 00:31:04.528 "claim_type": "read_many_write_one", 00:31:04.528 "zoned": false, 00:31:04.528 "supported_io_types": { 00:31:04.528 "read": true, 00:31:04.528 "write": true, 00:31:04.528 "unmap": true, 00:31:04.528 "flush": true, 00:31:04.528 "reset": true, 00:31:04.528 "nvme_admin": true, 00:31:04.528 "nvme_io": true, 00:31:04.528 "nvme_io_md": false, 00:31:04.528 "write_zeroes": true, 00:31:04.528 "zcopy": false, 00:31:04.528 "get_zone_info": false, 00:31:04.528 "zone_management": false, 00:31:04.528 "zone_append": false, 00:31:04.528 "compare": true, 00:31:04.528 "compare_and_write": false, 00:31:04.528 "abort": true, 00:31:04.528 "seek_hole": false, 00:31:04.528 "seek_data": false, 00:31:04.528 "copy": true, 00:31:04.528 "nvme_iov_md": false 00:31:04.528 }, 00:31:04.528 "driver_specific": { 00:31:04.528 "nvme": [ 00:31:04.528 { 00:31:04.528 "pci_address": "0000:00:11.0", 00:31:04.528 "trid": { 00:31:04.528 "trtype": "PCIe", 00:31:04.528 "traddr": "0000:00:11.0" 00:31:04.528 }, 00:31:04.528 "ctrlr_data": { 00:31:04.528 "cntlid": 0, 00:31:04.528 "vendor_id": "0x1b36", 00:31:04.528 "model_number": "QEMU NVMe Ctrl", 00:31:04.528 "serial_number": "12341", 00:31:04.528 "firmware_revision": "8.0.0", 00:31:04.528 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:04.528 "oacs": { 00:31:04.528 "security": 0, 00:31:04.528 "format": 1, 00:31:04.528 "firmware": 0, 00:31:04.528 "ns_manage": 1 00:31:04.528 }, 00:31:04.528 "multi_ctrlr": false, 00:31:04.528 "ana_reporting": false 00:31:04.528 }, 00:31:04.528 "vs": { 00:31:04.528 "nvme_version": "1.4" 00:31:04.528 }, 00:31:04.528 "ns_data": { 00:31:04.528 "id": 1, 00:31:04.528 "can_share": false 00:31:04.528 } 00:31:04.528 } 00:31:04.528 ], 00:31:04.528 "mp_policy": "active_passive" 00:31:04.528 } 00:31:04.528 } 00:31:04.528 ]' 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:04.528 04:53:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:31:04.528 04:53:11 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:31:04.528 04:53:11 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:04.528 04:53:11 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:31:04.528 04:53:11 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:04.528 04:53:11 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:04.789 04:53:11 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b69b9a88-5364-42b9-bf4b-d63e0568c16b 00:31:04.789 04:53:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:31:04.789 04:53:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b69b9a88-5364-42b9-bf4b-d63e0568c16b 00:31:05.051 04:53:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:05.312 04:53:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=59f2548c-f5a4-4cf8-9bca-a458bff5c981 00:31:05.312 04:53:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 59f2548c-f5a4-4cf8-9bca-a458bff5c981 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:31:05.574 04:53:12 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7708dfd-d194-444c-89b3-e38e1409d090 00:31:05.574 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:05.574 { 00:31:05.574 "name": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:05.574 "aliases": [ 00:31:05.574 "lvs/nvme0n1p0" 00:31:05.574 ], 00:31:05.574 "product_name": "Logical Volume", 00:31:05.574 "block_size": 4096, 00:31:05.574 "num_blocks": 26476544, 00:31:05.574 "uuid": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:05.574 "assigned_rate_limits": { 00:31:05.574 "rw_ios_per_sec": 0, 00:31:05.574 "rw_mbytes_per_sec": 0, 00:31:05.574 "r_mbytes_per_sec": 0, 00:31:05.574 "w_mbytes_per_sec": 0 00:31:05.574 }, 00:31:05.574 "claimed": false, 00:31:05.574 "zoned": false, 00:31:05.574 "supported_io_types": { 00:31:05.574 "read": true, 00:31:05.574 "write": true, 00:31:05.574 "unmap": true, 00:31:05.574 "flush": false, 00:31:05.574 "reset": true, 00:31:05.574 "nvme_admin": false, 00:31:05.574 "nvme_io": false, 00:31:05.574 "nvme_io_md": false, 00:31:05.574 "write_zeroes": true, 00:31:05.574 "zcopy": false, 00:31:05.574 "get_zone_info": false, 00:31:05.574 "zone_management": false, 00:31:05.574 "zone_append": false, 00:31:05.574 "compare": false, 00:31:05.574 "compare_and_write": false, 00:31:05.574 "abort": false, 00:31:05.574 "seek_hole": true, 00:31:05.574 "seek_data": true, 00:31:05.574 "copy": false, 00:31:05.574 "nvme_iov_md": false 00:31:05.574 }, 00:31:05.574 "driver_specific": { 00:31:05.574 "lvol": { 00:31:05.574 "lvol_store_uuid": "59f2548c-f5a4-4cf8-9bca-a458bff5c981", 00:31:05.574 "base_bdev": "nvme0n1", 00:31:05.574 "thin_provision": true, 00:31:05.575 "num_allocated_clusters": 0, 00:31:05.575 "snapshot": false, 00:31:05.575 "clone": false, 00:31:05.575 "esnap_clone": false 00:31:05.575 } 00:31:05.575 } 00:31:05.575 } 00:31:05.575 ]' 00:31:05.575 04:53:12 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:05.575 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:05.575 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:05.834 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:05.834 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:05.834 04:53:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:05.834 04:53:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:31:05.834 04:53:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:31:05.834 04:53:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:06.093 04:53:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:06.093 04:53:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:06.093 04:53:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:06.093 { 00:31:06.093 "name": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:06.093 "aliases": [ 00:31:06.093 "lvs/nvme0n1p0" 00:31:06.093 ], 00:31:06.093 "product_name": "Logical Volume", 00:31:06.093 "block_size": 4096, 00:31:06.093 "num_blocks": 26476544, 00:31:06.093 "uuid": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:06.093 "assigned_rate_limits": { 00:31:06.093 "rw_ios_per_sec": 0, 00:31:06.093 "rw_mbytes_per_sec": 0, 00:31:06.093 "r_mbytes_per_sec": 0, 00:31:06.093 "w_mbytes_per_sec": 0 00:31:06.093 }, 00:31:06.093 "claimed": false, 00:31:06.093 "zoned": false, 00:31:06.093 "supported_io_types": { 00:31:06.093 "read": true, 00:31:06.093 "write": true, 00:31:06.093 "unmap": true, 00:31:06.093 "flush": false, 00:31:06.093 "reset": true, 00:31:06.093 "nvme_admin": false, 00:31:06.093 "nvme_io": false, 00:31:06.093 "nvme_io_md": false, 00:31:06.093 "write_zeroes": true, 00:31:06.093 "zcopy": false, 00:31:06.093 "get_zone_info": false, 00:31:06.093 "zone_management": false, 00:31:06.093 "zone_append": false, 00:31:06.093 "compare": false, 00:31:06.093 "compare_and_write": false, 00:31:06.093 "abort": false, 00:31:06.093 "seek_hole": true, 00:31:06.093 "seek_data": true, 00:31:06.093 "copy": false, 00:31:06.093 "nvme_iov_md": false 00:31:06.093 }, 00:31:06.093 "driver_specific": { 00:31:06.093 "lvol": { 00:31:06.093 "lvol_store_uuid": "59f2548c-f5a4-4cf8-9bca-a458bff5c981", 00:31:06.093 "base_bdev": "nvme0n1", 00:31:06.093 "thin_provision": true, 00:31:06.093 "num_allocated_clusters": 0, 00:31:06.093 "snapshot": false, 00:31:06.093 "clone": false, 00:31:06.093 "esnap_clone": false 00:31:06.093 } 00:31:06.093 } 00:31:06.093 } 00:31:06.093 ]' 00:31:06.093 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:06.351 04:53:13 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:06.351 04:53:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:31:06.351 04:53:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:06.351 04:53:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:31:06.351 04:53:13 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:31:06.351 04:53:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:06.351 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7708dfd-d194-444c-89b3-e38e1409d090 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:06.609 { 00:31:06.609 "name": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:06.609 "aliases": [ 00:31:06.609 "lvs/nvme0n1p0" 00:31:06.609 ], 00:31:06.609 "product_name": "Logical Volume", 00:31:06.609 "block_size": 4096, 00:31:06.609 "num_blocks": 26476544, 00:31:06.609 "uuid": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:06.609 "assigned_rate_limits": { 00:31:06.609 "rw_ios_per_sec": 0, 00:31:06.609 "rw_mbytes_per_sec": 0, 00:31:06.609 "r_mbytes_per_sec": 0, 00:31:06.609 "w_mbytes_per_sec": 0 00:31:06.609 }, 00:31:06.609 "claimed": false, 00:31:06.609 "zoned": false, 00:31:06.609 "supported_io_types": { 00:31:06.609 "read": true, 00:31:06.609 "write": true, 00:31:06.609 "unmap": true, 00:31:06.609 "flush": false, 00:31:06.609 "reset": true, 00:31:06.609 "nvme_admin": false, 00:31:06.609 "nvme_io": false, 00:31:06.609 "nvme_io_md": false, 00:31:06.609 "write_zeroes": true, 00:31:06.609 "zcopy": false, 00:31:06.609 "get_zone_info": false, 00:31:06.609 "zone_management": false, 00:31:06.609 "zone_append": false, 00:31:06.609 "compare": false, 00:31:06.609 "compare_and_write": false, 00:31:06.609 "abort": false, 00:31:06.609 "seek_hole": true, 00:31:06.609 "seek_data": true, 00:31:06.609 "copy": false, 00:31:06.609 "nvme_iov_md": false 00:31:06.609 }, 00:31:06.609 "driver_specific": { 00:31:06.609 "lvol": { 00:31:06.609 "lvol_store_uuid": "59f2548c-f5a4-4cf8-9bca-a458bff5c981", 00:31:06.609 "base_bdev": "nvme0n1", 00:31:06.609 "thin_provision": true, 00:31:06.609 "num_allocated_clusters": 0, 00:31:06.609 "snapshot": false, 00:31:06.609 "clone": false, 00:31:06.609 "esnap_clone": false 00:31:06.609 } 00:31:06.609 } 00:31:06.609 } 00:31:06.609 ]' 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:06.609 04:53:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:06.868 04:53:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:31:06.868 04:53:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c7708dfd-d194-444c-89b3-e38e1409d090 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:31:06.868 [2024-11-27 04:53:13.996291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.868 [2024-11-27 04:53:13.996328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:06.868 [2024-11-27 04:53:13.996342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:06.868 [2024-11-27 04:53:13.996349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.868 [2024-11-27 04:53:13.998554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.868 [2024-11-27 04:53:13.998583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:06.868 [2024-11-27 04:53:13.998592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:31:06.868 [2024-11-27 04:53:13.998598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.868 [2024-11-27 04:53:13.998676] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:06.869 [2024-11-27 04:53:13.999226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:06.869 [2024-11-27 04:53:13.999279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:13.999286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:06.869 [2024-11-27 04:53:13.999294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:31:06.869 [2024-11-27 04:53:13.999300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:13.999442] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 507cd504-226e-4c6b-9d3c-7332f33276de 00:31:06.869 [2024-11-27 04:53:14.000441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.000469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:06.869 [2024-11-27 04:53:14.000477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:06.869 [2024-11-27 04:53:14.000485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.005628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.005653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:06.869 [2024-11-27 04:53:14.005661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.058 ms 00:31:06.869 [2024-11-27 04:53:14.005670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.005763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.005773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:06.869 [2024-11-27 04:53:14.005779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:31:06.869 [2024-11-27 04:53:14.005788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.005822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.005830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:06.869 [2024-11-27 04:53:14.005835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:06.869 [2024-11-27 04:53:14.005844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.005875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:06.869 [2024-11-27 04:53:14.008805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.008827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:06.869 [2024-11-27 04:53:14.008838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:31:06.869 [2024-11-27 04:53:14.008843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.008888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.008906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:06.869 [2024-11-27 04:53:14.008914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:06.869 [2024-11-27 04:53:14.008919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.008948] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:06.869 [2024-11-27 04:53:14.009051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:06.869 [2024-11-27 04:53:14.009063] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:06.869 [2024-11-27 04:53:14.009085] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:06.869 [2024-11-27 04:53:14.009095] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009101] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009108] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:06.869 [2024-11-27 04:53:14.009114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:06.869 [2024-11-27 04:53:14.009122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:06.869 [2024-11-27 04:53:14.009128] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:06.869 [2024-11-27 04:53:14.009136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 [2024-11-27 04:53:14.009142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:06.869 [2024-11-27 04:53:14.009149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:31:06.869 [2024-11-27 04:53:14.009155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.009242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.869 
[2024-11-27 04:53:14.009248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:06.869 [2024-11-27 04:53:14.009256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:06.869 [2024-11-27 04:53:14.009261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.869 [2024-11-27 04:53:14.009377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:06.869 [2024-11-27 04:53:14.009385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:06.869 [2024-11-27 04:53:14.009392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:06.869 [2024-11-27 04:53:14.009410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:06.869 [2024-11-27 04:53:14.009428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.869 [2024-11-27 04:53:14.009439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:06.869 [2024-11-27 04:53:14.009444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:06.869 [2024-11-27 04:53:14.009451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.869 [2024-11-27 04:53:14.009455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:06.869 [2024-11-27 04:53:14.009462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:06.869 [2024-11-27 04:53:14.009468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:06.869 [2024-11-27 04:53:14.009481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:06.869 [2024-11-27 04:53:14.009499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:06.869 [2024-11-27 04:53:14.009515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:06.869 [2024-11-27 04:53:14.009532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:31:06.869 [2024-11-27 04:53:14.009547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:06.869 [2024-11-27 04:53:14.009566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.869 [2024-11-27 04:53:14.009578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:06.869 [2024-11-27 04:53:14.009583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:06.869 [2024-11-27 04:53:14.009589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.869 [2024-11-27 04:53:14.009593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:06.869 [2024-11-27 04:53:14.009599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:06.869 [2024-11-27 04:53:14.009604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:06.869 [2024-11-27 04:53:14.009615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:06.869 [2024-11-27 04:53:14.009622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009626] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:06.869 [2024-11-27 04:53:14.009633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:06.869 [2024-11-27 04:53:14.009638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.869 [2024-11-27 04:53:14.009651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:06.869 [2024-11-27 04:53:14.009660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:06.869 [2024-11-27 04:53:14.009664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:06.869 [2024-11-27 04:53:14.009671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:06.869 [2024-11-27 04:53:14.009675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:06.869 [2024-11-27 04:53:14.009682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:06.869 [2024-11-27 04:53:14.009689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:06.870 [2024-11-27 04:53:14.009697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:06.870 [2024-11-27 04:53:14.009713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:06.870 [2024-11-27 04:53:14.009718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:31:06.870 [2024-11-27 04:53:14.009725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:06.870 [2024-11-27 04:53:14.009730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:06.870 [2024-11-27 04:53:14.009736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:06.870 [2024-11-27 04:53:14.009741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:06.870 [2024-11-27 04:53:14.009748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:06.870 [2024-11-27 04:53:14.009753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:06.870 [2024-11-27 04:53:14.009760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:06.870 [2024-11-27 04:53:14.009790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:06.870 [2024-11-27 04:53:14.009798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.870 [2024-11-27 04:53:14.009810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:06.870 [2024-11-27 04:53:14.009816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:06.870 [2024-11-27 04:53:14.009822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:06.870 [2024-11-27 04:53:14.009827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.870 [2024-11-27 04:53:14.009834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:06.870 [2024-11-27 04:53:14.009839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:31:06.870 [2024-11-27 04:53:14.009846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.870 [2024-11-27 04:53:14.009931] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:31:06.870 [2024-11-27 04:53:14.009942] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:09.400 [2024-11-27 04:53:16.310026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.310104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:09.400 [2024-11-27 04:53:16.310119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2300.084 ms 00:31:09.400 [2024-11-27 04:53:16.310130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.335802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.335968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.400 [2024-11-27 04:53:16.335986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.429 ms 00:31:09.400 [2024-11-27 04:53:16.335997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.336152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.336166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:09.400 [2024-11-27 04:53:16.336189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:09.400 [2024-11-27 04:53:16.336202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.376847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.376994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.400 [2024-11-27 04:53:16.377079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.612 ms 00:31:09.400 [2024-11-27 04:53:16.377110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.377207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.377237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.400 [2024-11-27 04:53:16.377258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:09.400 [2024-11-27 04:53:16.377279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.377693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.377799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.400 [2024-11-27 04:53:16.377869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:31:09.400 [2024-11-27 04:53:16.377895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.378022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.378093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.400 [2024-11-27 04:53:16.378166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:09.400 [2024-11-27 04:53:16.378193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.392970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.393109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:31:09.400 [2024-11-27 04:53:16.393167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.734 ms 00:31:09.400 [2024-11-27 04:53:16.393192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.404599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:09.400 [2024-11-27 04:53:16.419334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.419445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:09.400 [2024-11-27 04:53:16.419499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.025 ms 00:31:09.400 [2024-11-27 04:53:16.419522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.482173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.482305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:09.400 [2024-11-27 04:53:16.482390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.556 ms 00:31:09.400 [2024-11-27 04:53:16.482415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.482641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.482672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:09.400 [2024-11-27 04:53:16.482732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:31:09.400 [2024-11-27 04:53:16.482756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.506142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.506249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:09.400 [2024-11-27 04:53:16.506300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.340 ms 00:31:09.400 [2024-11-27 04:53:16.506325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.528672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.528775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:09.400 [2024-11-27 04:53:16.528838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.274 ms 00:31:09.400 [2024-11-27 04:53:16.528858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.529497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.529586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:09.400 [2024-11-27 04:53:16.529637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:31:09.400 [2024-11-27 04:53:16.529659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.400 [2024-11-27 04:53:16.593992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.400 [2024-11-27 04:53:16.594132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:09.400 [2024-11-27 04:53:16.594191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.263 ms 00:31:09.400 [2024-11-27 04:53:16.594213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
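(Aside, not part of the test output: the "59 (of 60) MiB" notice above follows from the layout dump earlier in this trace. ftl0 exposes 23592960 blocks of 4096 bytes, and the L2P keeps one entry per block at "L2P address size: 4", so the full table is 90 MiB; bdev_ftl_create was invoked with --l2p_dram_limit 60, capping the resident portion at 60 MiB, of which the runtime reports 59 MiB usable, presumably after internal overhead. A minimal shell sketch of that arithmetic, with variable names chosen here for illustration:

# L2P sizing check -- numbers taken from the layout dump above, not computed by the test.
l2p_entries=23592960   # "L2P entries: 23592960" (one entry per 4 KiB user block)
entry_bytes=4          # "L2P address size: 4"
echo $(( l2p_entries * entry_bytes / 1024 / 1024 ))   # -> 90 (MiB for the full L2P table)
# --l2p_dram_limit 60 pins at most 60 MiB of those 90 MiB in DRAM,
# matching the "l2p maximum resident size is: 59 (of 60) MiB" notice.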
00:31:09.659 [2024-11-27 04:53:16.617918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.659 [2024-11-27 04:53:16.618029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:09.659 [2024-11-27 04:53:16.618116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.594 ms 00:31:09.659 [2024-11-27 04:53:16.618141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.659 [2024-11-27 04:53:16.640851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.659 [2024-11-27 04:53:16.640957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:09.659 [2024-11-27 04:53:16.641007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.637 ms 00:31:09.659 [2024-11-27 04:53:16.641029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.659 [2024-11-27 04:53:16.664200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.659 [2024-11-27 04:53:16.664322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:09.659 [2024-11-27 04:53:16.664374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.068 ms 00:31:09.659 [2024-11-27 04:53:16.664396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.659 [2024-11-27 04:53:16.664493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.659 [2024-11-27 04:53:16.664522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:09.659 [2024-11-27 04:53:16.664547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:09.659 [2024-11-27 04:53:16.664565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.659 [2024-11-27 04:53:16.664662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.659 [2024-11-27 04:53:16.664742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:09.659 [2024-11-27 04:53:16.664763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:09.659 [2024-11-27 04:53:16.664782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.659 [2024-11-27 04:53:16.665613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:09.659 [2024-11-27 04:53:16.668678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2669.037 ms, result 0 00:31:09.659 [2024-11-27 04:53:16.669566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:09.659 { 00:31:09.659 "name": "ftl0", 00:31:09.659 "uuid": "507cd504-226e-4c6b-9d3c-7332f33276de" 00:31:09.659 } 00:31:09.659 04:53:16 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:09.659 04:53:16 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:09.918 04:53:16 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:31:09.918 [ 00:31:09.918 { 00:31:09.918 "name": "ftl0", 00:31:09.918 "aliases": [ 00:31:09.918 "507cd504-226e-4c6b-9d3c-7332f33276de" 00:31:09.918 ], 00:31:09.918 "product_name": "FTL disk", 00:31:09.918 "block_size": 4096, 00:31:09.918 "num_blocks": 23592960, 00:31:09.918 "uuid": "507cd504-226e-4c6b-9d3c-7332f33276de", 00:31:09.918 "assigned_rate_limits": { 00:31:09.918 "rw_ios_per_sec": 0, 00:31:09.918 "rw_mbytes_per_sec": 0, 00:31:09.918 "r_mbytes_per_sec": 0, 00:31:09.918 "w_mbytes_per_sec": 0 00:31:09.918 }, 00:31:09.918 "claimed": false, 00:31:09.918 "zoned": false, 00:31:09.918 "supported_io_types": { 00:31:09.918 "read": true, 00:31:09.918 "write": true, 00:31:09.918 "unmap": true, 00:31:09.918 "flush": true, 00:31:09.918 "reset": false, 00:31:09.918 "nvme_admin": false, 00:31:09.918 "nvme_io": false, 00:31:09.918 "nvme_io_md": false, 00:31:09.918 "write_zeroes": true, 00:31:09.918 "zcopy": false, 00:31:09.918 "get_zone_info": false, 00:31:09.918 "zone_management": false, 00:31:09.918 "zone_append": false, 00:31:09.918 "compare": false, 00:31:09.918 "compare_and_write": false, 00:31:09.918 "abort": false, 00:31:09.918 "seek_hole": false, 00:31:09.918 "seek_data": false, 00:31:09.918 "copy": false, 00:31:09.918 "nvme_iov_md": false 00:31:09.918 }, 00:31:09.918 "driver_specific": { 00:31:09.918 "ftl": { 00:31:09.918 "base_bdev": "c7708dfd-d194-444c-89b3-e38e1409d090", 00:31:09.918 "cache": "nvc0n1p0" 00:31:09.918 } 00:31:09.918 } 00:31:09.918 } 00:31:09.918 ] 00:31:09.918 04:53:17 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:31:09.918 04:53:17 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:31:09.918 04:53:17 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:10.176 04:53:17 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:31:10.176 04:53:17 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:31:10.435 04:53:17 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:31:10.435 { 00:31:10.435 "name": "ftl0", 00:31:10.435 "aliases": [ 00:31:10.435 "507cd504-226e-4c6b-9d3c-7332f33276de" 00:31:10.435 ], 00:31:10.435 "product_name": "FTL disk", 00:31:10.435 "block_size": 4096, 00:31:10.435 "num_blocks": 23592960, 00:31:10.435 "uuid": "507cd504-226e-4c6b-9d3c-7332f33276de", 00:31:10.435 "assigned_rate_limits": { 00:31:10.435 "rw_ios_per_sec": 0, 00:31:10.435 "rw_mbytes_per_sec": 0, 00:31:10.435 "r_mbytes_per_sec": 0, 00:31:10.435 "w_mbytes_per_sec": 0 00:31:10.435 }, 00:31:10.435 "claimed": false, 00:31:10.435 "zoned": false, 00:31:10.435 "supported_io_types": { 00:31:10.435 "read": true, 00:31:10.435 "write": true, 00:31:10.435 "unmap": true, 00:31:10.435 "flush": true, 00:31:10.435 "reset": false, 00:31:10.435 "nvme_admin": false, 00:31:10.435 "nvme_io": false, 00:31:10.435 "nvme_io_md": false, 00:31:10.435 "write_zeroes": true, 00:31:10.435 "zcopy": false, 00:31:10.435 "get_zone_info": false, 00:31:10.435 "zone_management": false, 00:31:10.435 "zone_append": false, 00:31:10.435 "compare": false, 00:31:10.435 "compare_and_write": false, 00:31:10.435 "abort": false, 00:31:10.435 "seek_hole": false, 00:31:10.435 "seek_data": false, 00:31:10.435 "copy": false, 00:31:10.435 "nvme_iov_md": false 00:31:10.435 }, 00:31:10.435 "driver_specific": { 00:31:10.435 "ftl": { 00:31:10.435 "base_bdev": "c7708dfd-d194-444c-89b3-e38e1409d090", 
00:31:10.435 "cache": "nvc0n1p0" 00:31:10.435 } 00:31:10.435 } 00:31:10.435 } 00:31:10.435 ]' 00:31:10.435 04:53:17 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:31:10.435 04:53:17 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:31:10.435 04:53:17 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:10.694 [2024-11-27 04:53:17.721771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.721941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:10.694 [2024-11-27 04:53:17.722010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:10.694 [2024-11-27 04:53:17.722037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.722109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:10.694 [2024-11-27 04:53:17.724945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.725072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:10.694 [2024-11-27 04:53:17.725138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:31:10.694 [2024-11-27 04:53:17.725161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.725787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.725807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:10.694 [2024-11-27 04:53:17.725820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:31:10.694 [2024-11-27 04:53:17.725828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.729491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.729514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:10.694 [2024-11-27 04:53:17.729527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:31:10.694 [2024-11-27 04:53:17.729536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.736490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.736517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:10.694 [2024-11-27 04:53:17.736529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:31:10.694 [2024-11-27 04:53:17.736536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.760987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.761021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:10.694 [2024-11-27 04:53:17.761038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.351 ms 00:31:10.694 [2024-11-27 04:53:17.761046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.776467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.776501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:10.694 [2024-11-27 04:53:17.776518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.346 ms 00:31:10.694 [2024-11-27 04:53:17.776525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.776742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.776753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:10.694 [2024-11-27 04:53:17.776763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:31:10.694 [2024-11-27 04:53:17.776770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.800225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.800257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:10.694 [2024-11-27 04:53:17.800270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.419 ms 00:31:10.694 [2024-11-27 04:53:17.800277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.823565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.823682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:10.694 [2024-11-27 04:53:17.823703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.228 ms 00:31:10.694 [2024-11-27 04:53:17.823710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.846100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.846215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:10.694 [2024-11-27 04:53:17.846233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.330 ms 00:31:10.694 [2024-11-27 04:53:17.846240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.868614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.694 [2024-11-27 04:53:17.868646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:10.694 [2024-11-27 04:53:17.868659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.227 ms 00:31:10.694 [2024-11-27 04:53:17.868665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.694 [2024-11-27 04:53:17.868732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.694 [2024-11-27 04:53:17.868748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868816] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.868993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 
[2024-11-27 04:53:17.869044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:31:10.694 [2024-11-27 04:53:17.869302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.694 [2024-11-27 04:53:17.869685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.694 [2024-11-27 04:53:17.869697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de 00:31:10.694 [2024-11-27 04:53:17.869705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:10.694 [2024-11-27 04:53:17.869714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:10.694 [2024-11-27 04:53:17.869723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:10.695 [2024-11-27 04:53:17.869732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:10.695 [2024-11-27 04:53:17.869739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.695 [2024-11-27 04:53:17.869748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:31:10.695 [2024-11-27 04:53:17.869756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.695 [2024-11-27 04:53:17.869763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.695 [2024-11-27 04:53:17.869769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.695 [2024-11-27 04:53:17.869778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.695 [2024-11-27 04:53:17.869785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.695 [2024-11-27 04:53:17.869795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:31:10.695 [2024-11-27 04:53:17.869802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.695 [2024-11-27 04:53:17.882589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.695 [2024-11-27 04:53:17.882618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.695 [2024-11-27 04:53:17.882633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.751 ms 00:31:10.695 [2024-11-27 04:53:17.882640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.695 [2024-11-27 04:53:17.883039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.695 [2024-11-27 04:53:17.883060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.695 [2024-11-27 04:53:17.883085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:31:10.695 [2024-11-27 04:53:17.883093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:17.929426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:17.929464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.953 [2024-11-27 04:53:17.929478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:17.929487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:17.929605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:17.929614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.953 [2024-11-27 04:53:17.929625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:17.929633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:17.929696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:17.929708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.953 [2024-11-27 04:53:17.929720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:17.929727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:17.929758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:17.929767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.953 [2024-11-27 04:53:17.929778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:17.929785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.015505] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.015556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.953 [2024-11-27 04:53:18.015571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.015579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.081771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.081821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.953 [2024-11-27 04:53:18.081835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.081843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.081964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.081975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.953 [2024-11-27 04:53:18.081991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.081998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.082057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.082085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.953 [2024-11-27 04:53:18.082097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.082105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.082224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.082235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.953 [2024-11-27 04:53:18.082245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.082255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.082314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.082323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.953 [2024-11-27 04:53:18.082333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.082341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.082395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.082404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.953 [2024-11-27 04:53:18.082416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.082425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.953 [2024-11-27 04:53:18.082482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.953 [2024-11-27 04:53:18.082492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.953 [2024-11-27 04:53:18.082502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.953 [2024-11-27 04:53:18.082509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 
00:31:10.953 [2024-11-27 04:53:18.082720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.930 ms, result 0 
00:31:10.953 true 
00:31:10.953 04:53:18 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76416 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76416 ']' 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76416 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76416 
00:31:10.953 killing process with pid 76416 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76416' 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76416 
00:31:10.953 04:53:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76416 
00:31:17.621 04:53:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 
00:31:18.193 65536+0 records in 
00:31:18.193 65536+0 records out 
00:31:18.193 268435456 bytes (268 MB, 256 MiB) copied, 1.1075 s, 242 MB/s 
00:31:18.193 04:53:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
00:31:18.193 [2024-11-27 04:53:25.388852] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
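The xtrace lines above walk through the killprocess helper that ftl/trim.sh calls out of common/autotest_common.sh. A minimal Bash sketch of the flow those traces imply follows; it is a reconstruction from the traced commands, not the helper's verbatim source, and the body of the sudo branch is an assumption, since the trace only shows the '[' reactor_0 = sudo ']' test evaluating false:

killprocess() {
    local pid=$1
    # '[' -z 76416 ']' -- nothing to do when no pid was passed in
    [ -n "$pid" ] || return 1
    # kill -0 76416 -- probe that the process still exists
    kill -0 "$pid" || return 1
    if [ "$(uname)" = Linux ]; then
        # ps --no-headers -o comm= 76416 -- resolve the command name
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # '[' reactor_0 = sudo ']' -- a sudo wrapper would need its child
        # signalled instead; retargeting via pgrep here is an assumption
        if [ "$process_name" = sudo ]; then
            pid=$(pgrep -P "$pid")
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait 76416 -- reap the child so its exit status is observed
    wait "$pid"
}

In this run the target was pid 76416 with process name reactor_0, so the sudo branch was skipped; the console timestamps show wait blocking for roughly six seconds (00:31:10.953 to 00:31:17.621) while the app, having already logged its 'FTL shutdown' as finished, exited.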
00:31:18.193 [2024-11-27 04:53:25.389240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76598 ] 00:31:18.455 [2024-11-27 04:53:25.553540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.717 [2024-11-27 04:53:25.703529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.977 [2024-11-27 04:53:26.045204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.978 [2024-11-27 04:53:26.045308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.240 [2024-11-27 04:53:26.213047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.213133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:19.240 [2024-11-27 04:53:26.213151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:19.240 [2024-11-27 04:53:26.213162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.216897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.216966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.240 [2024-11-27 04:53:26.216980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:31:19.240 [2024-11-27 04:53:26.216991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.217175] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:19.240 [2024-11-27 04:53:26.218010] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:19.240 [2024-11-27 04:53:26.218050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.218062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.240 [2024-11-27 04:53:26.218088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:31:19.240 [2024-11-27 04:53:26.218097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.220532] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:19.240 [2024-11-27 04:53:26.236239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.236296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:19.240 [2024-11-27 04:53:26.236313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.709 ms 00:31:19.240 [2024-11-27 04:53:26.236321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.236458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.236472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:19.240 [2024-11-27 04:53:26.236483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:19.240 [2024-11-27 04:53:26.236492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.248334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:19.240 [2024-11-27 04:53:26.248382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.240 [2024-11-27 04:53:26.248395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.793 ms 00:31:19.240 [2024-11-27 04:53:26.248403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.248542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.248554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.240 [2024-11-27 04:53:26.248564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:19.240 [2024-11-27 04:53:26.248573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.248604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.248613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:19.240 [2024-11-27 04:53:26.248622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:19.240 [2024-11-27 04:53:26.248631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.248653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:19.240 [2024-11-27 04:53:26.253221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.240 [2024-11-27 04:53:26.253269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.240 [2024-11-27 04:53:26.253280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:31:19.240 [2024-11-27 04:53:26.253288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.240 [2024-11-27 04:53:26.253385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.241 [2024-11-27 04:53:26.253396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:19.241 [2024-11-27 04:53:26.253407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:19.241 [2024-11-27 04:53:26.253416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.241 [2024-11-27 04:53:26.253446] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:19.241 [2024-11-27 04:53:26.253473] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:19.241 [2024-11-27 04:53:26.253515] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:19.241 [2024-11-27 04:53:26.253533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:19.241 [2024-11-27 04:53:26.253645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:19.241 [2024-11-27 04:53:26.253657] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:19.241 [2024-11-27 04:53:26.253671] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:19.241 [2024-11-27 04:53:26.253686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:19.241 [2024-11-27 04:53:26.253696] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:19.241 [2024-11-27 04:53:26.253705] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:19.241 [2024-11-27 04:53:26.253713] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:19.241 [2024-11-27 04:53:26.253722] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:19.241 [2024-11-27 04:53:26.253730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:19.241 [2024-11-27 04:53:26.253739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.241 [2024-11-27 04:53:26.253747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:19.241 [2024-11-27 04:53:26.253755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:31:19.241 [2024-11-27 04:53:26.253763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.241 [2024-11-27 04:53:26.253853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.241 [2024-11-27 04:53:26.253864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:19.241 [2024-11-27 04:53:26.253873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:19.241 [2024-11-27 04:53:26.253881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.241 [2024-11-27 04:53:26.253985] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:19.241 [2024-11-27 04:53:26.253996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:19.241 [2024-11-27 04:53:26.254005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:19.241 [2024-11-27 04:53:26.254028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:19.241 [2024-11-27 04:53:26.254050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.241 [2024-11-27 04:53:26.254093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:19.241 [2024-11-27 04:53:26.254111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:19.241 [2024-11-27 04:53:26.254119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.241 [2024-11-27 04:53:26.254127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:19.241 [2024-11-27 04:53:26.254134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:19.241 [2024-11-27 04:53:26.254141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:19.241 [2024-11-27 04:53:26.254159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254166] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:19.241 [2024-11-27 04:53:26.254181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:19.241 [2024-11-27 04:53:26.254203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:19.241 [2024-11-27 04:53:26.254224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:19.241 [2024-11-27 04:53:26.254245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:19.241 [2024-11-27 04:53:26.254266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.241 [2024-11-27 04:53:26.254280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:19.241 [2024-11-27 04:53:26.254287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:19.241 [2024-11-27 04:53:26.254293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.241 [2024-11-27 04:53:26.254301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:19.241 [2024-11-27 04:53:26.254309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:19.241 [2024-11-27 04:53:26.254315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:19.241 [2024-11-27 04:53:26.254329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:19.241 [2024-11-27 04:53:26.254336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254343] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:19.241 [2024-11-27 04:53:26.254351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:19.241 [2024-11-27 04:53:26.254363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.241 [2024-11-27 04:53:26.254379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:19.241 [2024-11-27 04:53:26.254392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:19.241 [2024-11-27 04:53:26.254399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:19.241 
[2024-11-27 04:53:26.254422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:19.241 [2024-11-27 04:53:26.254430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:19.241 [2024-11-27 04:53:26.254438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:19.241 [2024-11-27 04:53:26.254448] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:19.241 [2024-11-27 04:53:26.254458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.241 [2024-11-27 04:53:26.254467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:19.242 [2024-11-27 04:53:26.254475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:19.242 [2024-11-27 04:53:26.254483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:19.242 [2024-11-27 04:53:26.254491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:19.242 [2024-11-27 04:53:26.254499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:19.242 [2024-11-27 04:53:26.254507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:19.242 [2024-11-27 04:53:26.254514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:19.242 [2024-11-27 04:53:26.254523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:19.242 [2024-11-27 04:53:26.254530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:19.242 [2024-11-27 04:53:26.254538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:19.242 [2024-11-27 04:53:26.254576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:19.242 [2024-11-27 04:53:26.254585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:19.242 [2024-11-27 04:53:26.254601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:19.242 [2024-11-27 04:53:26.254609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:19.242 [2024-11-27 04:53:26.254616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:19.242 [2024-11-27 04:53:26.254625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.254637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:19.242 [2024-11-27 04:53:26.254645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:31:19.242 [2024-11-27 04:53:26.254652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.293750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.293810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.242 [2024-11-27 04:53:26.293824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.038 ms 00:31:19.242 [2024-11-27 04:53:26.293832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.293985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.293997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:19.242 [2024-11-27 04:53:26.294007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:19.242 [2024-11-27 04:53:26.294016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.349835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.350133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.242 [2024-11-27 04:53:26.350165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.794 ms 00:31:19.242 [2024-11-27 04:53:26.350175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.350312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.350325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.242 [2024-11-27 04:53:26.350335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.242 [2024-11-27 04:53:26.350345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.351046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.351104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.242 [2024-11-27 04:53:26.351127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:31:19.242 [2024-11-27 04:53:26.351137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.351318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.351329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.242 [2024-11-27 04:53:26.351339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:31:19.242 [2024-11-27 04:53:26.351347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.370481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.370531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.242 [2024-11-27 04:53:26.370544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.108 ms 00:31:19.242 [2024-11-27 04:53:26.370553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.385834] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:19.242 [2024-11-27 04:53:26.386042] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:19.242 [2024-11-27 04:53:26.386085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.386096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:19.242 [2024-11-27 04:53:26.386108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.407 ms 00:31:19.242 [2024-11-27 04:53:26.386117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.412867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.413075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:19.242 [2024-11-27 04:53:26.413100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:31:19.242 [2024-11-27 04:53:26.413109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.242 [2024-11-27 04:53:26.426629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.242 [2024-11-27 04:53:26.426679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:19.242 [2024-11-27 04:53:26.426692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.357 ms 00:31:19.242 [2024-11-27 04:53:26.426700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.439761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.439807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.505 [2024-11-27 04:53:26.439821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.966 ms 00:31:19.505 [2024-11-27 04:53:26.439829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.440599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.440630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.505 [2024-11-27 04:53:26.440641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:31:19.505 [2024-11-27 04:53:26.440650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.516210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.516276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.505 [2024-11-27 04:53:26.516294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.528 ms 00:31:19.505 [2024-11-27 04:53:26.516305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.529234] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:19.505 [2024-11-27 04:53:26.554819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.554882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.505 [2024-11-27 04:53:26.554899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.401 ms 00:31:19.505 [2024-11-27 04:53:26.554908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.555033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.555047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.505 [2024-11-27 04:53:26.555057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:19.505 [2024-11-27 04:53:26.555098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.555171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.555182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.505 [2024-11-27 04:53:26.555216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:19.505 [2024-11-27 04:53:26.555226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.555268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.555282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.505 [2024-11-27 04:53:26.555295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:19.505 [2024-11-27 04:53:26.555303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.555345] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.505 [2024-11-27 04:53:26.555358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.555369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.505 [2024-11-27 04:53:26.555378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:19.505 [2024-11-27 04:53:26.555386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.583567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.583630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.505 [2024-11-27 04:53:26.583646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.155 ms 00:31:19.505 [2024-11-27 04:53:26.583657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.505 [2024-11-27 04:53:26.583814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.505 [2024-11-27 04:53:26.583828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.505 [2024-11-27 04:53:26.583839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:19.505 [2024-11-27 04:53:26.583848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
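The startup sequence traced above assembles ftl0 from two devices, a base bdev (103424.00 MiB, mapped to the data_btm region) and the nvc0n1p0 partition serving as the write-buffer NV cache (5171.00 MiB), then restores the superblock, band metadata, NV cache state, trim map, P2L checkpoints, and L2P left behind by the previous shutdown. For orientation, a hedged sketch of creating the same bdev by hand with SPDK's rpc.py rather than the test's --json config file; the base bdev name below is invented for illustration, and exact flag spellings should be confirmed with scripts/rpc.py bdev_ftl_create -h:

# my_base_bdev is hypothetical; ftl0 and nvc0n1p0 come from the trace above
./scripts/rpc.py bdev_ftl_create -b ftl0 -d my_base_bdev -c nvc0n1p0

The device UUID dumped in the statistics (507cd504-226e-4c6b-9d3c-7332f33276de) identifies the on-disk FTL instance across restarts, which is why this startup restores its metadata instead of formatting a fresh instance.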
00:31:19.505 [2024-11-27 04:53:26.585234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:19.505 [2024-11-27 04:53:26.588907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.749 ms, result 0 00:31:19.505 [2024-11-27 04:53:26.590048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:19.505 [2024-11-27 04:53:26.604033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:20.451  [2024-11-27T04:53:29.043Z] Copying: 10/256 [MB] (10 MBps) [2024-11-27T04:53:29.611Z] Copying: 21/256 [MB] (10 MBps) [2024-11-27T04:53:30.991Z] Copying: 44/256 [MB] (22 MBps) [2024-11-27T04:53:31.933Z] Copying: 78/256 [MB] (34 MBps) [2024-11-27T04:53:32.874Z] Copying: 92/256 [MB] (14 MBps) [2024-11-27T04:53:33.813Z] Copying: 106/256 [MB] (13 MBps) [2024-11-27T04:53:34.755Z] Copying: 116/256 [MB] (10 MBps) [2024-11-27T04:53:35.700Z] Copying: 127/256 [MB] (10 MBps) [2024-11-27T04:53:36.644Z] Copying: 139/256 [MB] (12 MBps) [2024-11-27T04:53:38.031Z] Copying: 151/256 [MB] (12 MBps) [2024-11-27T04:53:38.976Z] Copying: 169/256 [MB] (18 MBps) [2024-11-27T04:53:39.920Z] Copying: 184/256 [MB] (14 MBps) [2024-11-27T04:53:40.866Z] Copying: 194/256 [MB] (10 MBps) [2024-11-27T04:53:41.812Z] Copying: 207/256 [MB] (13 MBps) [2024-11-27T04:53:42.754Z] Copying: 219/256 [MB] (11 MBps) [2024-11-27T04:53:43.696Z] Copying: 230/256 [MB] (11 MBps) [2024-11-27T04:53:44.640Z] Copying: 242/256 [MB] (12 MBps) [2024-11-27T04:53:44.903Z] Copying: 255/256 [MB] (12 MBps) [2024-11-27T04:53:44.903Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-27 04:53:44.664823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:37.700 [2024-11-27 04:53:44.672465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.672499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:37.700 [2024-11-27 04:53:44.672513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:37.700 [2024-11-27 04:53:44.672526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.672544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:37.700 [2024-11-27 04:53:44.674929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.675076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:37.700 [2024-11-27 04:53:44.675092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:31:37.700 [2024-11-27 04:53:44.675099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.677460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.677488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:37.700 [2024-11-27 04:53:44.677496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:31:37.700 [2024-11-27 04:53:44.677503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.683830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.683861] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:37.700 [2024-11-27 04:53:44.683869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.313 ms 00:31:37.700 [2024-11-27 04:53:44.683876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.689159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.689181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:37.700 [2024-11-27 04:53:44.689190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.256 ms 00:31:37.700 [2024-11-27 04:53:44.689196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.708254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.708285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:37.700 [2024-11-27 04:53:44.708295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.013 ms 00:31:37.700 [2024-11-27 04:53:44.708301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.720606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.720636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:37.700 [2024-11-27 04:53:44.720648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.271 ms 00:31:37.700 [2024-11-27 04:53:44.720655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.720750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.720758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:37.700 [2024-11-27 04:53:44.720765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:37.700 [2024-11-27 04:53:44.720777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.739291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.739315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:37.700 [2024-11-27 04:53:44.739323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.501 ms 00:31:37.700 [2024-11-27 04:53:44.739329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.757679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.757702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:37.700 [2024-11-27 04:53:44.757710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:31:37.700 [2024-11-27 04:53:44.757716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.775148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.775174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:37.700 [2024-11-27 04:53:44.775182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.406 ms 00:31:37.700 [2024-11-27 04:53:44.775187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.792421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:37.700 [2024-11-27 04:53:44.792444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:37.700 [2024-11-27 04:53:44.792451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:31:37.700 [2024-11-27 04:53:44.792457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.700 [2024-11-27 04:53:44.792482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:37.700 [2024-11-27 04:53:44.792495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:37.700 [2024-11-27 04:53:44.792594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792616] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 04:53:44.792754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:37.701 [2024-11-27 
04:53:44.792760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
[... Bands 48-100: 53 identical ftl_dev_dump_bands NOTICE lines (04:53:44.792765 through 04:53:44.793080), each 0 / 261120 wr_cnt: 0 state: free ...]
00:31:37.701 [2024-11-27 04:53:44.793093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:37.701 [2024-11-27 04:53:44.793099] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de
00:31:37.701 [2024-11-27 04:53:44.793106] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:31:37.701 [2024-11-27 04:53:44.793112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:31:37.701 [2024-11-27 04:53:44.793118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:31:37.701 [2024-11-27 04:53:44.793123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:37.701 [2024-11-27 04:53:44.793129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:37.701 [2024-11-27 04:53:44.793135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:37.702 [2024-11-27 04:53:44.793141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:37.702 [2024-11-27 04:53:44.793146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:37.702 [2024-11-27 04:53:44.793151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:37.702 [2024-11-27 04:53:44.793156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:37.702 [2024-11-27 04:53:44.793165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:37.702 [2024-11-27 04:53:44.793171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms
00:31:37.702 [2024-11-27 04:53:44.793177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.803584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:37.702 [2024-11-27 04:53:44.803608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:37.702 [2024-11-27 04:53:44.803616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.394 ms
00:31:37.702 [2024-11-27 04:53:44.803622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.803922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:37.702 [2024-11-27 04:53:44.803935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:37.702 [2024-11-27 04:53:44.803942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms
00:31:37.702 [2024-11-27 04:53:44.803948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
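The stats dump above prints WAF: inf because no user writes have happened yet: write amplification is the ratio of total media writes to user writes, and 960 / 0 is reported as infinity. A minimal sketch of that arithmetic (the function and names are illustrative, not SPDK API):

    # Recompute the WAF figure printed by ftl_dev_dump_stats above.
    def waf(total_writes: int, user_writes: int) -> float:
        # 960 metadata writes, 0 user writes: amplification is unbounded,
        # which the log renders as "inf".
        return float("inf") if user_writes == 0 else total_writes / user_writes

    assert waf(960, 0) == float("inf")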
00:31:37.702 [2024-11-27 04:53:44.833060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.702 [2024-11-27 04:53:44.833096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:37.702 [2024-11-27 04:53:44.833104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.702 [2024-11-27 04:53:44.833110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.833166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.702 [2024-11-27 04:53:44.833173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:37.702 [2024-11-27 04:53:44.833179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.702 [2024-11-27 04:53:44.833185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.833220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.702 [2024-11-27 04:53:44.833228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:37.702 [2024-11-27 04:53:44.833234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.702 [2024-11-27 04:53:44.833240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.833254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.702 [2024-11-27 04:53:44.833263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:37.702 [2024-11-27 04:53:44.833269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.702 [2024-11-27 04:53:44.833275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.702 [2024-11-27 04:53:44.895767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.702 [2024-11-27 04:53:44.895800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:37.702 [2024-11-27 04:53:44.895810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.702 [2024-11-27 04:53:44.895817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:37.963 [2024-11-27 04:53:44.947343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:37.963 [2024-11-27 04:53:44.947414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:37.963 [2024-11-27 04:53:44.947463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
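Every management step in this log, on both the Action and the Rollback paths, is emitted by trace_step in mngt/ftl_mngt.c as a fixed quadruple: Action/Rollback marker, name, duration, status. A hedged sketch of pulling these records apart, assuming only the line format visible above (not any SPDK tooling):

    import re

    # Matches e.g. "[2024-11-27 04:53:44.833060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback"
    TRACE = re.compile(
        r"\[(?P<ts>[0-9-]+ [0-9:.]+)\] (?P<src>\S+): *(?P<line>\d+):(?P<func>\w+): "
        r"\*NOTICE\*: \[FTL\]\[(?P<dev>\w+)\] ?(?P<msg>.*)"
    )

    def parse(line: str):
        # Returns None for non-FTL lines (shell traces, DPDK output, etc.).
        m = TRACE.search(line)
        return m.groupdict() if m else None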
00:31:37.963 [2024-11-27 04:53:44.947540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:37.963 [2024-11-27 04:53:44.947556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:37.963 [2024-11-27 04:53:44.947603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:37.963 [2024-11-27 04:53:44.947661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:37.963 [2024-11-27 04:53:44.947714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:37.963 [2024-11-27 04:53:44.947723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:37.963 [2024-11-27 04:53:44.947729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:37.963 [2024-11-27 04:53:44.947855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.368 ms, result 0
00:31:38.907
00:31:38.907
00:31:38.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:53:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76812
00:31:38.907 04:53:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76812
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76812 ']'
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:38.907 04:53:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:38.907 04:53:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:31:38.907 [2024-11-27 04:53:45.876672] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
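The waitforlisten helper traced above (common/autotest_common.sh) just polls until the freshly launched spdk_tgt accepts connections on /var/tmp/spdk.sock, giving up after max_retries=100. A rough Python equivalent of that wait loop, purely illustrative and not the shell helper itself:

    import socket, time

    def wait_for_listen(path: str = "/var/tmp/spdk.sock", retries: int = 100) -> None:
        # Poll the RPC UNIX domain socket until spdk_tgt starts accepting.
        for _ in range(retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return  # target is up and listening
            except OSError:
                time.sleep(0.1)
            finally:
                s.close()
        raise TimeoutError(f"no listener on {path}")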
00:31:38.907 [2024-11-27 04:53:45.876797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76812 ]
00:31:38.907 [2024-11-27 04:53:46.031724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:39.169 [2024-11-27 04:53:46.119468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:39.741 04:53:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:39.741 04:53:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:31:39.741 04:53:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:31:39.741 [2024-11-27 04:53:46.895300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-27 04:53:46.895359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:40.004 [2024-11-27 04:53:47.069273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.004 [2024-11-27 04:53:47.069326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:40.004 [2024-11-27 04:53:47.069354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:31:40.004 [2024-11-27 04:53:47.069363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.072359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.072403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:40.005 [2024-11-27 04:53:47.072416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms
00:31:40.005 [2024-11-27 04:53:47.072424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.072531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:40.005 [2024-11-27 04:53:47.073288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:40.005 [2024-11-27 04:53:47.073320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.073353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:40.005 [2024-11-27 04:53:47.073366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms
00:31:40.005 [2024-11-27 04:53:47.073377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.075334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:31:40.005 [2024-11-27 04:53:47.090036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.090098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:31:40.005 [2024-11-27 04:53:47.090113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.709 ms
00:31:40.005 [2024-11-27 04:53:47.090124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
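Here ftl/trim.sh replays the saved JSON configuration through scripts/rpc.py load_config; the two "unable to find bdev" notices appear harmless in this run because the cache bdev nvc0n1 is created later during the same config replay (the matching "Some configs were skipped..." wrap-up appears after startup below). A hedged sketch of driving the same RPC from Python via subprocess, using the paths from this run; the assumption that load_config reads the JSON on stdin should be checked against rpc.py --help on your tree:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # ftl.json is a hypothetical saved config file for illustration only.
    with open("ftl.json") as cfg:
        subprocess.run([RPC, "load_config"], stdin=cfg, check=True)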
00:31:40.005 [2024-11-27 04:53:47.090230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.090245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:31:40.005 [2024-11-27 04:53:47.090256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:31:40.005 [2024-11-27 04:53:47.090267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.101566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.101616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:40.005 [2024-11-27 04:53:47.101627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.243 ms
00:31:40.005 [2024-11-27 04:53:47.101638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.101776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.101791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:40.005 [2024-11-27 04:53:47.101800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:31:40.005 [2024-11-27 04:53:47.101817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.101847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.101860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:31:40.005 [2024-11-27 04:53:47.101868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:31:40.005 [2024-11-27 04:53:47.101878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.101902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:31:40.005 [2024-11-27 04:53:47.106478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.106517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:40.005 [2024-11-27 04:53:47.106530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.579 ms
00:31:40.005 [2024-11-27 04:53:47.106539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.106604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.106614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:31:40.005 [2024-11-27 04:53:47.106630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:31:40.005 [2024-11-27 04:53:47.106639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.106664] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:31:40.005 [2024-11-27 04:53:47.106690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:31:40.005 [2024-11-27 04:53:47.106741] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:31:40.005 [2024-11-27 04:53:47.106760] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:31:40.005 [2024-11-27 04:53:47.106875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:31:40.005 [2024-11-27 04:53:47.106888] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:31:40.005 [2024-11-27 04:53:47.106908] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:31:40.005 [2024-11-27 04:53:47.106920] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:31:40.005 [2024-11-27 04:53:47.106933] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:31:40.005 [2024-11-27 04:53:47.106945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:31:40.005 [2024-11-27 04:53:47.106955] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:31:40.005 [2024-11-27 04:53:47.106964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:31:40.005 [2024-11-27 04:53:47.106977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:31:40.005 [2024-11-27 04:53:47.106986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.106997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:31:40.005 [2024-11-27 04:53:47.107008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms
00:31:40.005 [2024-11-27 04:53:47.107022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.107132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.005 [2024-11-27 04:53:47.107145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:31:40.005 [2024-11-27 04:53:47.107155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms
00:31:40.005 [2024-11-27 04:53:47.107166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.005 [2024-11-27 04:53:47.107272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:31:40.005 [2024-11-27 04:53:47.107297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:31:40.005 [2024-11-27 04:53:47.107307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:31:40.005 [2024-11-27 04:53:47.107364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:31:40.005 [2024-11-27 04:53:47.107397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:40.005 [2024-11-27 04:53:47.107415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:31:40.005 [2024-11-27 04:53:47.107425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:31:40.005 [2024-11-27 04:53:47.107433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:40.005 [2024-11-27 04:53:47.107442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:31:40.005 [2024-11-27 04:53:47.107450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:31:40.005 [2024-11-27 04:53:47.107459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:31:40.005 [2024-11-27 04:53:47.107482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:31:40.005 [2024-11-27 04:53:47.107511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:31:40.005 [2024-11-27 04:53:47.107538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:31:40.005 [2024-11-27 04:53:47.107561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:31:40.005 [2024-11-27 04:53:47.107589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:40.005 [2024-11-27 04:53:47.107606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:31:40.005 [2024-11-27 04:53:47.107614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:40.005 [2024-11-27 04:53:47.107632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:31:40.005 [2024-11-27 04:53:47.107642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:31:40.005 [2024-11-27 04:53:47.107649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:40.005 [2024-11-27 04:53:47.107658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:31:40.005 [2024-11-27 04:53:47.107665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:31:40.005 [2024-11-27 04:53:47.107676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.005 [2024-11-27 04:53:47.107683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:31:40.006 [2024-11-27 04:53:47.107693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:31:40.006 [2024-11-27 04:53:47.107700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.006 [2024-11-27 04:53:47.107709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:31:40.006 [2024-11-27 04:53:47.107720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:31:40.006 [2024-11-27 04:53:47.107732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:40.006 [2024-11-27 04:53:47.107741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:40.006 [2024-11-27 04:53:47.107751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:31:40.006 [2024-11-27 04:53:47.107760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:31:40.006 [2024-11-27 04:53:47.107770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:31:40.006 [2024-11-27 04:53:47.107780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:31:40.006 [2024-11-27 04:53:47.107789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:31:40.006 [2024-11-27 04:53:47.107798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:31:40.006 [2024-11-27 04:53:47.107810] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:40.006 [2024-11-27 04:53:47.107820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:31:40.006 [2024-11-27 04:53:47.107845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:31:40.006 [2024-11-27 04:53:47.107856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:31:40.006 [2024-11-27 04:53:47.107864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:31:40.006 [2024-11-27 04:53:47.107875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:31:40.006 [2024-11-27 04:53:47.107884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:31:40.006 [2024-11-27 04:53:47.107896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:31:40.006 [2024-11-27 04:53:47.107906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:31:40.006 [2024-11-27 04:53:47.107916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:31:40.006 [2024-11-27 04:53:47.107925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:31:40.006 [2024-11-27 04:53:47.107972] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:40.006 [2024-11-27 04:53:47.107981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.107993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:40.006 [2024-11-27 04:53:47.108002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:40.006 [2024-11-27 04:53:47.108012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:40.006 [2024-11-27 04:53:47.108019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:40.006 [2024-11-27 04:53:47.108029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.108036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:40.006 [2024-11-27 04:53:47.108047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms
00:31:40.006 [2024-11-27 04:53:47.108057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.006 [2024-11-27 04:53:47.146039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.146106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:40.006 [2024-11-27 04:53:47.146121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.894 ms
00:31:40.006 [2024-11-27 04:53:47.146133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.006 [2024-11-27 04:53:47.146278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.146290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:40.006 [2024-11-27 04:53:47.146303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:31:40.006 [2024-11-27 04:53:47.146311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.006 [2024-11-27 04:53:47.185970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.186020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:40.006 [2024-11-27 04:53:47.186036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.630 ms
00:31:40.006 [2024-11-27 04:53:47.186045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.006 [2024-11-27 04:53:47.186156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.186168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:40.006 [2024-11-27 04:53:47.186185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:40.006 [2024-11-27 04:53:47.186194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.006 [2024-11-27 04:53:47.186864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.186906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:40.006 [2024-11-27 04:53:47.186919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms
00:31:40.006 [2024-11-27 04:53:47.186929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
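The layout dump above is internally consistent: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and, assuming the usual 4 KiB FTL block size (an assumption, the log does not state it), those entries address 92160 MiB of user data carved out of the 102400.00 MiB data_btm region. A quick check of that arithmetic:

    MiB = 1024 * 1024

    l2p_entries, addr_size = 23592960, 4           # from the layout dump above
    assert l2p_entries * addr_size / MiB == 90.0   # "Region l2p ... blocks: 90.00 MiB"

    block = 4096                                   # assumed FTL block size
    user_mib = l2p_entries * block / MiB           # 92160.0 MiB of addressable user data
    print(user_mib, "MiB user space vs 102400.00 MiB base data region")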
00:31:40.006 [2024-11-27 04:53:47.187132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.006 [2024-11-27 04:53:47.187144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:40.006 [2024-11-27 04:53:47.187159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms
00:31:40.006 [2024-11-27 04:53:47.187167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.208130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.208172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:40.268 [2024-11-27 04:53:47.208185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.932 ms
00:31:40.268 [2024-11-27 04:53:47.208194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.233645] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:31:40.268 [2024-11-27 04:53:47.233700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:40.268 [2024-11-27 04:53:47.233723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.233734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:40.268 [2024-11-27 04:53:47.233748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.406 ms
00:31:40.268 [2024-11-27 04:53:47.233766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.260516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.260564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:40.268 [2024-11-27 04:53:47.260580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms
00:31:40.268 [2024-11-27 04:53:47.260590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.273562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.273605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:40.268 [2024-11-27 04:53:47.273624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.872 ms
00:31:40.268 [2024-11-27 04:53:47.273633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.286170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.286219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:40.268 [2024-11-27 04:53:47.286234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms
00:31:40.268 [2024-11-27 04:53:47.286242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.286932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.286966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:40.268 [2024-11-27 04:53:47.286980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms
00:31:40.268 [2024-11-27 04:53:47.286989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.360388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.360447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:40.268 [2024-11-27 04:53:47.360467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.369 ms
00:31:40.268 [2024-11-27 04:53:47.360476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.372642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:31:40.268 [2024-11-27 04:53:47.396883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.396946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:31:40.268 [2024-11-27 04:53:47.396960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.299 ms
00:31:40.268 [2024-11-27 04:53:47.396972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.397139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.397157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:31:40.268 [2024-11-27 04:53:47.397167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:31:40.268 [2024-11-27 04:53:47.397179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.397251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.397264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:31:40.268 [2024-11-27 04:53:47.397273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:31:40.268 [2024-11-27 04:53:47.397287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.397316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.397354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:31:40.268 [2024-11-27 04:53:47.397365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:40.268 [2024-11-27 04:53:47.397376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.397420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:40.268 [2024-11-27 04:53:47.397437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.397449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:40.268 [2024-11-27 04:53:47.397461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:31:40.268 [2024-11-27 04:53:47.397472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.424360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.424411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:40.268 [2024-11-27 04:53:47.424428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.857 ms
00:31:40.268 [2024-11-27 04:53:47.424438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
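The NV cache restore above ("full chunks = 1, empty chunks = 3") lines up with the "NV cache chunk count 5" from the layout dump: a 5171.00 MiB cache device split into 5 chunks is roughly 1 GiB per chunk, and one chunk's worth of cached data survived the previous instance, which the earlier "SHM: clean 0" line suggests did not shut down clean. The unaccounted fifth chunk is presumably the one held open for writes or compaction; the log does not say. Rough numbers:

    chunks, cache_mib = 5, 5171.00      # from the layout dump above
    print(cache_mib / chunks, "MiB per chunk, approximately")   # ~1034 MiB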
00:31:40.268 [2024-11-27 04:53:47.424560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.268 [2024-11-27 04:53:47.424573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:40.268 [2024-11-27 04:53:47.424592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:31:40.268 [2024-11-27 04:53:47.424602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.268 [2024-11-27 04:53:47.426003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:40.268 [2024-11-27 04:53:47.429413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.334 ms, result 0
00:31:40.268 [2024-11-27 04:53:47.431456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:40.268 Some configs were skipped because the RPC state that can call them passed over.
00:31:40.530 04:53:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:31:40.530 [2024-11-27 04:53:47.676084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.530 [2024-11-27 04:53:47.676154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:31:40.530 [2024-11-27 04:53:47.676167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms
00:31:40.530 [2024-11-27 04:53:47.676180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.530 [2024-11-27 04:53:47.676217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.242 ms, result 0
00:31:40.530 true
00:31:40.530 04:53:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:31:40.792 [2024-11-27 04:53:47.892054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:40.792 [2024-11-27 04:53:47.892116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:31:40.792 [2024-11-27 04:53:47.892131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms
00:31:40.792 [2024-11-27 04:53:47.892139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:40.792 [2024-11-27 04:53:47.892179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.939 ms, result 0
00:31:40.792 true
00:31:40.792 04:53:47 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76812
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76812 ']'
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76812
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76812
killing process with pid 76812
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76812'
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76812
00:31:40.792 04:53:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76812
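The two bdev_ftl_unmap calls above trim the first and the last 1024 blocks of the 23592960-entry L2P space: 23592960 - 1024 = 23591936, the --lba passed to the second call. A hedged sketch of issuing the same pair of RPCs from Python, using the exact CLI and flags visible in the log; the helper name is illustrative:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    L2P_ENTRIES, CHUNK = 23592960, 1024

    def unmap(lba: int, num_blocks: int = CHUNK) -> None:
        # Mirrors: rpc.py bdev_ftl_unmap -b ftl0 --lba <lba> --num_blocks 1024
        subprocess.run([RPC, "bdev_ftl_unmap", "-b", "ftl0",
                        "--lba", str(lba), "--num_blocks", str(num_blocks)],
                       check=True)

    unmap(0)                     # head of the device
    unmap(L2P_ENTRIES - CHUNK)   # tail: LBA 23591936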
00:31:41.739 [2024-11-27 04:53:48.782613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.782714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:41.739 [2024-11-27 04:53:48.782731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:31:41.739 [2024-11-27 04:53:48.782742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.782770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:31:41.739 [2024-11-27 04:53:48.786201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.786248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:41.739 [2024-11-27 04:53:48.786266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms
00:31:41.739 [2024-11-27 04:53:48.786275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.786616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.786631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:41.739 [2024-11-27 04:53:48.786644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms
00:31:41.739 [2024-11-27 04:53:48.786653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.791335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.791382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:41.739 [2024-11-27 04:53:48.791395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms
00:31:41.739 [2024-11-27 04:53:48.791404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.798333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.798378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:41.739 [2024-11-27 04:53:48.798391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms
00:31:41.739 [2024-11-27 04:53:48.798400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.809719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.809770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:31:41.739 [2024-11-27 04:53:48.809787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.228 ms
00:31:41.739 [2024-11-27 04:53:48.809795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.820052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.820109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:31:41.739 [2024-11-27 04:53:48.820124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.187 ms
00:31:41.739 [2024-11-27 04:53:48.820132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.820295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.820309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:31:41.739 [2024-11-27 04:53:48.820322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:31:41.739 [2024-11-27 04:53:48.820331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.831985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.832028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:31:41.739 [2024-11-27 04:53:48.832042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.628 ms
00:31:41.739 [2024-11-27 04:53:48.832050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.843158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.843199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:31:41.739 [2024-11-27 04:53:48.843215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.032 ms
00:31:41.739 [2024-11-27 04:53:48.843223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.739 [2024-11-27 04:53:48.853483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.739 [2024-11-27 04:53:48.853525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:31:41.739 [2024-11-27 04:53:48.853538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.190 ms
00:31:41.739 [2024-11-27 04:53:48.853545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.740 [2024-11-27 04:53:48.863709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.740 [2024-11-27 04:53:48.863751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:31:41.740 [2024-11-27 04:53:48.863764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.066 ms
00:31:41.740 [2024-11-27 04:53:48.863772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:41.740 [2024-11-27 04:53:48.863834] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:41.740 [2024-11-27 04:53:48.863851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-100: 99 identical ftl_dev_dump_bands NOTICE lines (04:53:48.863868 through 04:53:48.864829), each 0 / 261120 wr_cnt: 0 state: free ...]
00:31:41.741 [2024-11-27 04:53:48.864855] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:41.741 [2024-11-27 04:53:48.864870] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de
00:31:41.741 [2024-11-27 04:53:48.864882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:31:41.741 [2024-11-27 04:53:48.864895] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:31:41.741 [2024-11-27 04:53:48.864904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:31:41.741 [2024-11-27 04:53:48.864915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:41.741 [2024-11-27 04:53:48.864925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:41.741 [2024-11-27 04:53:48.864936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:41.741 [2024-11-27 04:53:48.864944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:41.741 [2024-11-27 04:53:48.864956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:41.741 [2024-11-27 04:53:48.864964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:41.741 [2024-11-27 04:53:48.864975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:41.741 [2024-11-27 04:53:48.864983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:41.741 [2024-11-27 04:53:48.864995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:31:41.741 [2024-11-27 04:53:48.865004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.879715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.741 [2024-11-27 04:53:48.879755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:41.741 [2024-11-27 04:53:48.879773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.672 ms 00:31:41.741 [2024-11-27 04:53:48.879782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.880279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.741 [2024-11-27 04:53:48.880308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:41.741 [2024-11-27 04:53:48.880324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:31:41.741 [2024-11-27 04:53:48.880333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.932933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.741 [2024-11-27 04:53:48.932979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:41.741 [2024-11-27 04:53:48.932994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.741 [2024-11-27 04:53:48.933003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.933114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.741 [2024-11-27 04:53:48.933126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:41.741 [2024-11-27 04:53:48.933141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.741 [2024-11-27 04:53:48.933150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.933208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.741 [2024-11-27 04:53:48.933222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:41.741 [2024-11-27 04:53:48.933237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.741 [2024-11-27 04:53:48.933246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.741 [2024-11-27 04:53:48.933268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.741 [2024-11-27 04:53:48.933276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:41.741 [2024-11-27 04:53:48.933288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.741 [2024-11-27 04:53:48.933299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.026162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.026284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:42.003 [2024-11-27 04:53:49.026301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.026311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 
04:53:49.095990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:42.003 [2024-11-27 04:53:49.096060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:42.003 [2024-11-27 04:53:49.096188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:42.003 [2024-11-27 04:53:49.096248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:42.003 [2024-11-27 04:53:49.096384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:42.003 [2024-11-27 04:53:49.096447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:42.003 [2024-11-27 04:53:49.096527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:42.003 [2024-11-27 04:53:49.096607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:42.003 [2024-11-27 04:53:49.096619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:42.003 [2024-11-27 04:53:49.096626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:42.003 [2024-11-27 04:53:49.096791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 314.148 ms, result 0 00:31:42.602 04:53:49 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:31:42.602 04:53:49 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
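Two numbers in the shutdown dump above are worth decoding. WAF is the write amplification factor, total media writes divided by user writes; with total writes 960 against user writes 0 the ratio is infinite, which the log prints as "WAF: inf" (no user data was written in this phase). The --count=65536 passed to spdk_dd is in logical blocks of the input bdev; assuming FTL's 4 KiB logical block size (an assumption, though it is the only block size consistent with the layout dump below), that is 256 MiB, matching the "256/256 [MB]" copy progress reported further down. A minimal C sketch of both calculations, illustrative only and not SPDK code:

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical helper: reproduces the WAF figure that ftl_dev_dump_stats
 * prints, from the two counters shown in the dump above. */
static double waf(double total_writes, double user_writes)
{
    return user_writes == 0.0 ? INFINITY : total_writes / user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960.0, 0.0));        /* prints "WAF: inf" */

    /* --count=65536 blocks at an assumed 4 KiB FTL block size */
    unsigned long long bytes = 65536ULL * 4096ULL;
    printf("transfer: %llu MiB\n", bytes >> 20); /* prints "transfer: 256 MiB" */
    return 0;
}
```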
[2024-11-27 04:53:49.739803] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
[2024-11-27 04:53:49.740096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ]
[2024-11-27 04:53:49.895862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 04:53:49.983911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-27 04:53:50.217407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-27 04:53:50.217464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-27 04:53:50.373873] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
[2024-11-27 04:53:50.376140] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 2.201 ms, status: 0
[2024-11-27 04:53:50.376241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-27 04:53:50.377038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-27 04:53:50.377083] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.848 ms, status: 0
[2024-11-27 04:53:50.378438] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-27 04:53:50.388977] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 10.541 ms, status: 0
[2024-11-27 04:53:50.389103] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.026 ms, status: 0
[2024-11-27 04:53:50.395306] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 6.148 ms, status: 0
[2024-11-27 04:53:50.395413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.047 ms, status: 0
[2024-11-27 04:53:50.395454] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.007 ms, status: 0
[2024-11-27 04:53:50.395495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-27 04:53:50.398479] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 2.990 ms, status: 0
[2024-11-27 04:53:50.398542] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.012 ms, status: 0
[2024-11-27 04:53:50.398578] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-27 04:53:50.398593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-27 04:53:50.398622] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-27 04:53:50.398636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-27 04:53:50.398721] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-27 04:53:50.398729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-27 04:53:50.398738] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-27 04:53:50.398748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-27 04:53:50.398755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-27 04:53:50.398761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-27 04:53:50.398767] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-27 04:53:50.398773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-27 04:53:50.398778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-27 04:53:50.398784] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.209 ms, status: 0
[2024-11-27 04:53:50.398868] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.053 ms, status: 0
[2024-11-27 04:53:50.398965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb: offset 0.00 MiB, blocks 0.12 MiB
  Region l2p: offset 0.12 MiB, blocks 90.00 MiB
  Region band_md: offset 90.12 MiB, blocks 0.50 MiB
  Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
  Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
  Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
  Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
  Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
  Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
  Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
  Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
  Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
  Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
  Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-27 04:53:50.399211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
  Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
  Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-27 04:53:50.399267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
  Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-27 04:53:50.399360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-27 04:53:50.399393] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.481 ms, status: 0
[2024-11-27 04:53:50.423506] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 24.034 ms, status: 0
[2024-11-27 04:53:50.423640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.053 ms, status: 0
[2024-11-27 04:53:50.460759] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 37.082 ms, status: 0
[2024-11-27 04:53:50.460865] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
[2024-11-27 04:53:50.461289] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.387 ms, status: 0
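The layout numbers above cross-check. The SB metadata entry type:0x2 spans blk_sz 0x5a00 = 23040 blocks, which at a 4 KiB FTL block size is exactly the 90.00 MiB reported for Region l2p; the same 90 MiB also equals the 23592960 L2P entries times the reported 4-byte L2P address size; and entry type:0x9 (blk_sz 0x1900000) is exactly the 102400.00 MiB data_btm region. The 4 KiB block size is an inference here, but it is the only value that makes all three figures agree. A short self-contained check:

```c
#include <assert.h>
#include <stdio.h>

int main(void)
{
    const unsigned long long blk = 4096;       /* assumed FTL block size */
    const unsigned long long mib = 1024 * 1024;

    /* SB entry type:0x2 (blk_sz 0x5a00) lines up with "Region l2p ... 90.00 MiB". */
    assert(0x5a00ULL * blk / mib == 90);

    /* 23592960 L2P entries at 4 bytes each fill that same 90 MiB region. */
    assert(23592960ULL * 4 / mib == 90);

    /* SB entry type:0x9 (blk_sz 0x1900000) matches "Region data_btm ... 102400.00 MiB". */
    assert(0x1900000ULL * blk / mib == 102400);

    printf("layout arithmetic consistent at a 4 KiB block size\n");
    return 0;
}
```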
[2024-11-27 04:53:50.461455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.102 ms, status: 0
[2024-11-27 04:53:50.473677] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 12.184 ms, status: 0
[2024-11-27 04:53:50.484226] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
[2024-11-27 04:53:50.484250] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-27 04:53:50.484259] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 10.453 ms, status: 0
[2024-11-27 04:53:50.502959] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 18.622 ms, status: 0
[2024-11-27 04:53:50.512029] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 8.975 ms, status: 0
[2024-11-27 04:53:50.520999] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 8.887 ms, status: 0
[2024-11-27 04:53:50.521512] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.411 ms, status: 0
[2024-11-27 04:53:50.569086] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 47.524 ms, status: 0
[2024-11-27 04:53:50.577399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-27 04:53:50.591901] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 22.712 ms, status: 0
[2024-11-27 04:53:50.592033] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.012 ms, status: 0
[2024-11-27 04:53:50.592113] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.030 ms, status: 0
[2024-11-27 04:53:50.592162] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.008 ms, status: 0
[2024-11-27 04:53:50.592212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-27 04:53:50.592221] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.009 ms, status: 0
[2024-11-27 04:53:50.611408] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 19.152 ms, status: 0
[2024-11-27 04:53:50.611522] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.033 ms, status: 0
[2024-11-27 04:53:50.612569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-27 04:53:50.614950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 238.442 ms, result 0
[2024-11-27 04:53:50.615884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-27 04:53:50.626514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-27T04:53:52.843Z] Copying: 30/256 [MB] (30 MBps)
[2024-11-27T04:53:53.796Z] Copying: 46/256 [MB] (16 MBps)
[2024-11-27T04:53:54.745Z] Copying: 56/256 [MB] (10 MBps)
[2024-11-27T04:53:55.690Z] Copying: 69/256 [MB] (12 MBps)
[2024-11-27T04:53:56.636Z] Copying: 86/256 [MB] (16 MBps)
[2024-11-27T04:53:58.025Z] Copying: 101/256 [MB] (15 MBps)
[2024-11-27T04:53:58.966Z] Copying: 120/256 [MB] (18 MBps)
[2024-11-27T04:53:59.904Z] Copying: 141/256 [MB] (20 MBps)
[2024-11-27T04:54:00.844Z] Copying: 162/256 [MB] (21 MBps)
[2024-11-27T04:54:01.786Z] Copying: 188/256 [MB] (25 MBps)
[2024-11-27T04:54:02.728Z] Copying: 215/256 [MB] (27 MBps)
[2024-11-27T04:54:03.296Z] Copying: 234/256 [MB] (19 MBps)
[2024-11-27T04:54:03.296Z] Copying: 256/256 [MB] (average 20 MBps)
[2024-11-27 04:54:03.197425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
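The copy above is the spdk_dd read launched earlier: 256 MB moved in roughly 12.7 s of wall clock (FTL startup finished at 04:53:50.6, the final progress tick lands at 04:54:03.3), which squares with the reported average of 20 MBps, since 256 / 20 = 12.8 s. The per-interval rates wander between 10 and 30 MBps over the life of the transfer.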
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:31:56.093 [2024-11-27 04:54:03.212975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.093 [2024-11-27 04:54:03.219884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.093 [2024-11-27 04:54:03.219904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:56.093 [2024-11-27 04:54:03.219912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.892 ms 00:31:56.093 [2024-11-27 04:54:03.219920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.093 [2024-11-27 04:54:03.242602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.093 [2024-11-27 04:54:03.242631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:56.093 [2024-11-27 04:54:03.242643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:31:56.093 [2024-11-27 04:54:03.242650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.093 [2024-11-27 04:54:03.256468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.093 [2024-11-27 04:54:03.256494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:56.093 [2024-11-27 04:54:03.256509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.798 ms 00:31:56.093 [2024-11-27 04:54:03.256516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.093 [2024-11-27 04:54:03.256645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.093 [2024-11-27 04:54:03.256655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:56.093 [2024-11-27 04:54:03.256670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:56.093 [2024-11-27 04:54:03.256677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.093 [2024-11-27 04:54:03.279975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.093 [2024-11-27 04:54:03.279999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:56.093 [2024-11-27 04:54:03.280009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.282 ms 00:31:56.093 [2024-11-27 04:54:03.280016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.356 [2024-11-27 04:54:03.302096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.356 [2024-11-27 04:54:03.302120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:56.356 [2024-11-27 04:54:03.302130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.061 ms 00:31:56.356 [2024-11-27 04:54:03.302137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.356 [2024-11-27 04:54:03.323738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.356 [2024-11-27 04:54:03.323762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:56.356 [2024-11-27 04:54:03.323771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.582 ms 00:31:56.356 [2024-11-27 04:54:03.323778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.356 [2024-11-27 04:54:03.346234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.356 [2024-11-27 04:54:03.346260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:31:56.356 [2024-11-27 04:54:03.346269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.411 ms 00:31:56.356 [2024-11-27 04:54:03.346276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.356 [2024-11-27 04:54:03.346296] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:56.356 [2024-11-27 04:54:03.346309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 
04:54:03.346469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:31:56.356 [2024-11-27 04:54:03.346647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:56.356 [2024-11-27 04:54:03.346789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.346998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.347005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.347012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.347019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.347027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:56.357 [2024-11-27 04:54:03.347042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:56.357 [2024-11-27 04:54:03.347050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de 00:31:56.357 [2024-11-27 04:54:03.347057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:56.357 [2024-11-27 04:54:03.347074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:56.357 [2024-11-27 04:54:03.347082] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:56.357 [2024-11-27 04:54:03.347089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:56.357 [2024-11-27 04:54:03.347096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:56.357 [2024-11-27 04:54:03.347103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:56.357 [2024-11-27 04:54:03.347112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:56.357 [2024-11-27 04:54:03.347119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:56.357 [2024-11-27 04:54:03.347125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:56.357 [2024-11-27 04:54:03.347142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.357 [2024-11-27 04:54:03.347149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:56.357 [2024-11-27 04:54:03.347157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:31:56.357 [2024-11-27 04:54:03.347164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.359636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.357 [2024-11-27 04:54:03.359660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:56.357 [2024-11-27 04:54:03.359670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:31:56.357 [2024-11-27 04:54:03.359679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.360022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.357 [2024-11-27 04:54:03.360035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:56.357 [2024-11-27 04:54:03.360044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:31:56.357 [2024-11-27 04:54:03.360051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.394980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.395009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:56.357 [2024-11-27 04:54:03.395019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.395031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 
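(The shutdown statistics dump above reports total writes: 960 against user writes: 0, and the WAF line prints inf accordingly: write amplification is conventionally the ratio of media writes to host writes, so with zero host writes the ratio is undefined and the dump falls back to infinity. A minimal sketch of that arithmetic, assuming the conventional definition; the helper below is illustrative, not SPDK code:

    # Write-amplification factor as the dump above appears to compute it:
    # total media writes divided by user (host) writes.
    def waf(total_writes: int, user_writes: int) -> float:
        if user_writes == 0:
            # No host I/O yet -> ratio undefined; report inf, like "WAF: inf".
            return float("inf")
        return total_writes / user_writes

    print(waf(960, 0))  # inf -- the values from this shutdown dump
)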
[2024-11-27 04:54:03.395138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.395149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:56.357 [2024-11-27 04:54:03.395157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.395165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.395205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.395215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:56.357 [2024-11-27 04:54:03.395222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.395230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.395250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.395257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:56.357 [2024-11-27 04:54:03.395265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.395272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.474947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.474996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:56.357 [2024-11-27 04:54:03.475008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.475016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:56.357 [2024-11-27 04:54:03.544418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:56.357 [2024-11-27 04:54:03.544543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:56.357 [2024-11-27 04:54:03.544607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:56.357 [2024-11-27 04:54:03.544735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544743] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:56.357 [2024-11-27 04:54:03.544804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:56.357 [2024-11-27 04:54:03.544879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.357 [2024-11-27 04:54:03.544887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.357 [2024-11-27 04:54:03.544936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.357 [2024-11-27 04:54:03.544950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:56.357 [2024-11-27 04:54:03.544958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.358 [2024-11-27 04:54:03.544967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.358 [2024-11-27 04:54:03.545154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.820 ms, result 0 00:31:57.301 00:31:57.301 00:31:57.301 04:54:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:31:57.301 04:54:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:31:57.873 04:54:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:57.873 [2024-11-27 04:54:04.967556] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
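(For context on the trim.sh steps above: cmp --bytes=4194304 <data> /dev/zero exits zero only if the first 4 MiB read back from the FTL device match /dev/zero, i.e. the trimmed range really reads as all zeroes; md5sum then records a checksum of the data file, and spdk_dd copies 1024 blocks of the prepared random_pattern file onto the ftl0 bdev (--if, --ob, --count flags as shown). A rough Python stand-in for the zero-check, included only to make the cmp invocation concrete; the path is the one from the log:

    # Illustrative equivalent of `cmp --bytes=4194304 <data> /dev/zero`:
    # succeed only if the first 4 MiB of the dumped file are all zero bytes.
    LIMIT = 4194304        # --bytes=4194304 (4 MiB)
    CHUNK = 1 << 20        # compare in 1 MiB reads

    def leading_bytes_zero(path: str, limit: int = LIMIT) -> bool:
        remaining = limit
        with open(path, "rb") as f:
            while remaining:
                buf = f.read(min(CHUNK, remaining))
                if not buf:                      # premature EOF would fail cmp too
                    return False
                if buf.count(0) != len(buf):     # any non-zero byte present
                    return False
                remaining -= len(buf)
        return True

    print(leading_bytes_zero("/home/vagrant/spdk_repo/spdk/test/ftl/data"))
)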
00:31:57.873 [2024-11-27 04:54:04.967888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77030 ] 00:31:58.134 [2024-11-27 04:54:05.129809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.134 [2024-11-27 04:54:05.263873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.394 [2024-11-27 04:54:05.563432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.394 [2024-11-27 04:54:05.563513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.655 [2024-11-27 04:54:05.725754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.725812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:58.655 [2024-11-27 04:54:05.725828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:58.655 [2024-11-27 04:54:05.725837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.728832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.728876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:58.655 [2024-11-27 04:54:05.728888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:31:58.655 [2024-11-27 04:54:05.728896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.729009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:58.655 [2024-11-27 04:54:05.729796] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:58.655 [2024-11-27 04:54:05.729821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.729829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:58.655 [2024-11-27 04:54:05.729840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:31:58.655 [2024-11-27 04:54:05.729848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.731758] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:58.655 [2024-11-27 04:54:05.745462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.745503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:58.655 [2024-11-27 04:54:05.745517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.705 ms 00:31:58.655 [2024-11-27 04:54:05.745525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.745638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.745650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:58.655 [2024-11-27 04:54:05.745660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:58.655 [2024-11-27 04:54:05.745669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.753558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:58.655 [2024-11-27 04:54:05.753592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:58.655 [2024-11-27 04:54:05.753603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.843 ms 00:31:58.655 [2024-11-27 04:54:05.753611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.655 [2024-11-27 04:54:05.753723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.655 [2024-11-27 04:54:05.753734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:58.656 [2024-11-27 04:54:05.753743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:58.656 [2024-11-27 04:54:05.753752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.753780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.656 [2024-11-27 04:54:05.753789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:58.656 [2024-11-27 04:54:05.753797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:58.656 [2024-11-27 04:54:05.753805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.753828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:58.656 [2024-11-27 04:54:05.757781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.656 [2024-11-27 04:54:05.757813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:58.656 [2024-11-27 04:54:05.757824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:31:58.656 [2024-11-27 04:54:05.757832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.757907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.656 [2024-11-27 04:54:05.757918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:58.656 [2024-11-27 04:54:05.757928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:58.656 [2024-11-27 04:54:05.757936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.757962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:58.656 [2024-11-27 04:54:05.757982] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:58.656 [2024-11-27 04:54:05.758020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:58.656 [2024-11-27 04:54:05.758036] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:58.656 [2024-11-27 04:54:05.758156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:58.656 [2024-11-27 04:54:05.758169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:58.656 [2024-11-27 04:54:05.758181] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:58.656 [2024-11-27 04:54:05.758195] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758204] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:58.656 [2024-11-27 04:54:05.758221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:58.656 [2024-11-27 04:54:05.758229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:58.656 [2024-11-27 04:54:05.758236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:58.656 [2024-11-27 04:54:05.758244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.656 [2024-11-27 04:54:05.758251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:58.656 [2024-11-27 04:54:05.758259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:31:58.656 [2024-11-27 04:54:05.758267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.758354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.656 [2024-11-27 04:54:05.758364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:58.656 [2024-11-27 04:54:05.758372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:58.656 [2024-11-27 04:54:05.758379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.656 [2024-11-27 04:54:05.758482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:58.656 [2024-11-27 04:54:05.758501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:58.656 [2024-11-27 04:54:05.758510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:58.656 [2024-11-27 04:54:05.758535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:58.656 [2024-11-27 04:54:05.758557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.656 [2024-11-27 04:54:05.758571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:58.656 [2024-11-27 04:54:05.758586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:58.656 [2024-11-27 04:54:05.758593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.656 [2024-11-27 04:54:05.758600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:58.656 [2024-11-27 04:54:05.758607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:58.656 [2024-11-27 04:54:05.758614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:58.656 [2024-11-27 04:54:05.758627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758634] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:58.656 [2024-11-27 04:54:05.758647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:58.656 [2024-11-27 04:54:05.758667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:58.656 [2024-11-27 04:54:05.758687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:58.656 [2024-11-27 04:54:05.758706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:58.656 [2024-11-27 04:54:05.758726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.656 [2024-11-27 04:54:05.758738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:58.656 [2024-11-27 04:54:05.758744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:58.656 [2024-11-27 04:54:05.758751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.656 [2024-11-27 04:54:05.758763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:58.656 [2024-11-27 04:54:05.758770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:58.656 [2024-11-27 04:54:05.758777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:58.656 [2024-11-27 04:54:05.758790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:58.656 [2024-11-27 04:54:05.758797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:58.656 [2024-11-27 04:54:05.758812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:58.656 [2024-11-27 04:54:05.758822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.656 [2024-11-27 04:54:05.758837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:58.656 [2024-11-27 04:54:05.758844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:58.656 [2024-11-27 04:54:05.758851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:58.656 
[2024-11-27 04:54:05.758858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:58.656 [2024-11-27 04:54:05.758864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:58.656 [2024-11-27 04:54:05.758871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:58.656 [2024-11-27 04:54:05.758879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:58.656 [2024-11-27 04:54:05.758889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.656 [2024-11-27 04:54:05.758898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:58.656 [2024-11-27 04:54:05.758906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:58.656 [2024-11-27 04:54:05.758912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:58.656 [2024-11-27 04:54:05.758920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:58.656 [2024-11-27 04:54:05.758927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:58.656 [2024-11-27 04:54:05.758934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:58.656 [2024-11-27 04:54:05.758941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:58.656 [2024-11-27 04:54:05.758948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:58.656 [2024-11-27 04:54:05.758955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:58.656 [2024-11-27 04:54:05.758962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.758969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.758976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.758984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.758991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:58.657 [2024-11-27 04:54:05.759000] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:58.657 [2024-11-27 04:54:05.759008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.759016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:58.657 [2024-11-27 04:54:05.759024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:58.657 [2024-11-27 04:54:05.759031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:58.657 [2024-11-27 04:54:05.759038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:58.657 [2024-11-27 04:54:05.759046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.759057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:58.657 [2024-11-27 04:54:05.759079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:31:58.657 [2024-11-27 04:54:05.759087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.791119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.791162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:58.657 [2024-11-27 04:54:05.791173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.976 ms 00:31:58.657 [2024-11-27 04:54:05.791182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.791324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.791336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:58.657 [2024-11-27 04:54:05.791344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:58.657 [2024-11-27 04:54:05.791352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.837130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.837178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:58.657 [2024-11-27 04:54:05.837195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.754 ms 00:31:58.657 [2024-11-27 04:54:05.837204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.837318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.837356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:58.657 [2024-11-27 04:54:05.837366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:58.657 [2024-11-27 04:54:05.837374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.837886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.837918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:58.657 [2024-11-27 04:54:05.837937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:31:58.657 [2024-11-27 04:54:05.837945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.838120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.838131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:58.657 [2024-11-27 04:54:05.838140] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:31:58.657 [2024-11-27 04:54:05.838148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.657 [2024-11-27 04:54:05.854299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.657 [2024-11-27 04:54:05.854336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:58.657 [2024-11-27 04:54:05.854347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.129 ms 00:31:58.657 [2024-11-27 04:54:05.854355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.868853] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:31:58.918 [2024-11-27 04:54:05.868892] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:58.918 [2024-11-27 04:54:05.868906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.868915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:58.918 [2024-11-27 04:54:05.868924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.438 ms 00:31:58.918 [2024-11-27 04:54:05.868932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.894669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.894711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:58.918 [2024-11-27 04:54:05.894724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.641 ms 00:31:58.918 [2024-11-27 04:54:05.894733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.907307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.907344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:58.918 [2024-11-27 04:54:05.907357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.482 ms 00:31:58.918 [2024-11-27 04:54:05.907364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.919498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.919535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:58.918 [2024-11-27 04:54:05.919547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.050 ms 00:31:58.918 [2024-11-27 04:54:05.919555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.920256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.920281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:58.918 [2024-11-27 04:54:05.920292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:31:58.918 [2024-11-27 04:54:05.920300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.984274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:05.984336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:58.918 [2024-11-27 04:54:05.984352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.944 ms 00:31:58.918 [2024-11-27 04:54:05.984361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:05.995731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:58.918 [2024-11-27 04:54:06.014530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.014574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:58.918 [2024-11-27 04:54:06.014589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.067 ms 00:31:58.918 [2024-11-27 04:54:06.014605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.014699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.014711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:58.918 [2024-11-27 04:54:06.014721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:58.918 [2024-11-27 04:54:06.014729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.014788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.014798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:58.918 [2024-11-27 04:54:06.014807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:58.918 [2024-11-27 04:54:06.014820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.014852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.014861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:58.918 [2024-11-27 04:54:06.014870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:58.918 [2024-11-27 04:54:06.014878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.014916] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:58.918 [2024-11-27 04:54:06.014927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.014935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:58.918 [2024-11-27 04:54:06.014944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:58.918 [2024-11-27 04:54:06.014952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.040500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.040544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:58.918 [2024-11-27 04:54:06.040557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.525 ms 00:31:58.918 [2024-11-27 04:54:06.040567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.918 [2024-11-27 04:54:06.040681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.918 [2024-11-27 04:54:06.040692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:58.918 [2024-11-27 04:54:06.040703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:58.918 [2024-11-27 04:54:06.040711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
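(The layout dump earlier in this run (L2P entries: 23592960, L2P address size: 4) is consistent with the 90.00 MiB l2p region there and with the "l2p maximum resident size is: 59 (of 60) MiB" notice above: 23592960 entries at 4 bytes each is exactly 90 MiB of map, of which the L2P cache keeps only a capped portion resident. A quick check of that arithmetic; the 4 KiB logical-block assumption in the last line is ours, not stated in the log:

    # Sanity-check the L2P sizing reported in the layout dump.
    entries = 23592960      # "L2P entries" from the dump
    addr_size = 4           # "L2P address size" in bytes
    l2p_bytes = entries * addr_size
    print(l2p_bytes / (1024 * 1024))     # 90.0 -> matches "Region l2p ... 90.00 MiB"

    # If each entry maps one 4 KiB logical block (an assumption here),
    # the addressable user capacity would be:
    print(entries * 4096 / (1024 ** 3))  # 90.0 (GiB)
)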
00:31:58.918 [2024-11-27 04:54:06.042033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:58.918 [2024-11-27 04:54:06.045459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.957 ms, result 0 00:31:58.918 [2024-11-27 04:54:06.046616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:58.918 [2024-11-27 04:54:06.060140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:59.178  [2024-11-27T04:54:06.381Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-11-27 04:54:06.219243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:59.178 [2024-11-27 04:54:06.228491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.228529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:59.178 [2024-11-27 04:54:06.228548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:59.178 [2024-11-27 04:54:06.228557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.228578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:59.178 [2024-11-27 04:54:06.231480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.231510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:59.178 [2024-11-27 04:54:06.231521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:31:59.178 [2024-11-27 04:54:06.231530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.234202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.234237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:59.178 [2024-11-27 04:54:06.234246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:31:59.178 [2024-11-27 04:54:06.234255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.238292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.238316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:59.178 [2024-11-27 04:54:06.238326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:31:59.178 [2024-11-27 04:54:06.238335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.245222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.245252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:59.178 [2024-11-27 04:54:06.245262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.859 ms 00:31:59.178 [2024-11-27 04:54:06.245270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.269719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.269753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:59.178 [2024-11-27 04:54:06.269765] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:31:59.178 [2024-11-27 04:54:06.269772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.284765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.284807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:59.178 [2024-11-27 04:54:06.284820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:31:59.178 [2024-11-27 04:54:06.284828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.284971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.284982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:59.178 [2024-11-27 04:54:06.285002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:59.178 [2024-11-27 04:54:06.285010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.310130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.310167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:59.178 [2024-11-27 04:54:06.310179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.102 ms 00:31:59.178 [2024-11-27 04:54:06.310186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.335078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.335115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:59.178 [2024-11-27 04:54:06.335127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.847 ms 00:31:59.178 [2024-11-27 04:54:06.335135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.178 [2024-11-27 04:54:06.358913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.178 [2024-11-27 04:54:06.358953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:59.178 [2024-11-27 04:54:06.358965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.733 ms 00:31:59.178 [2024-11-27 04:54:06.358973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.441 [2024-11-27 04:54:06.382852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.441 [2024-11-27 04:54:06.382890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:59.441 [2024-11-27 04:54:06.382903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.803 ms 00:31:59.441 [2024-11-27 04:54:06.382911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.441 [2024-11-27 04:54:06.382960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:59.441 [2024-11-27 04:54:06.382976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.382988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.382996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:31:59.441 [2024-11-27 04:54:06.383011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:59.441 [2024-11-27 04:54:06.383135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383590] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:59.442 [2024-11-27 04:54:06.383777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:59.442 [2024-11-27 04:54:06.383785] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de 00:31:59.442 [2024-11-27 04:54:06.383793] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:59.442 [2024-11-27 04:54:06.383801] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:31:59.442 [2024-11-27 04:54:06.383809] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:59.442 [2024-11-27 04:54:06.383817] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:59.442 [2024-11-27 04:54:06.383825] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:59.442 [2024-11-27 04:54:06.383833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:59.442 [2024-11-27 04:54:06.383844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:59.442 [2024-11-27 04:54:06.383851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:59.443 [2024-11-27 04:54:06.383858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:59.443 [2024-11-27 04:54:06.383865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.443 [2024-11-27 04:54:06.383873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:59.443 [2024-11-27 04:54:06.383881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:31:59.443 [2024-11-27 04:54:06.383888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.397109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.443 [2024-11-27 04:54:06.397144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:59.443 [2024-11-27 04:54:06.397156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:31:59.443 [2024-11-27 04:54:06.397165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.397585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.443 [2024-11-27 04:54:06.397595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:59.443 [2024-11-27 04:54:06.397604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:31:59.443 [2024-11-27 04:54:06.397612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.436380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.436421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.443 [2024-11-27 04:54:06.436434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.436449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.436538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.436547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.443 [2024-11-27 04:54:06.436556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.436564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.436619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.436628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.443 [2024-11-27 04:54:06.436637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.436645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.436668] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.436677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.443 [2024-11-27 04:54:06.436685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.436692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.520684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.520738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.443 [2024-11-27 04:54:06.520752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.520767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.590876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.590931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.443 [2024-11-27 04:54:06.590944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.590953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.443 [2024-11-27 04:54:06.591041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.443 [2024-11-27 04:54:06.591129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.443 [2024-11-27 04:54:06.591259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:59.443 [2024-11-27 04:54:06.591325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.443 [2024-11-27 04:54:06.591394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591402] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.443 [2024-11-27 04:54:06.591468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.443 [2024-11-27 04:54:06.591476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.443 [2024-11-27 04:54:06.591484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.443 [2024-11-27 04:54:06.591641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.131 ms, result 0 00:32:00.388 00:32:00.388 00:32:00.388 04:54:07 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77058 00:32:00.388 04:54:07 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77058 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77058 ']' 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.388 04:54:07 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.388 04:54:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:00.388 [2024-11-27 04:54:07.466926] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
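[editor's note] The xtrace lines above come from autotest_common.sh's waitforlisten helper, which blocks until the freshly launched spdk_tgt (pid 77058 here) is up and serving RPC on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A minimal sketch of that polling pattern follows; the function name is hypothetical and the real helper does more (the trace shows it also validates its pid argument and loops with an index check), so treat this as an illustration, not SPDK's implementation:

wait_for_rpc_sock() {                            # hypothetical name, not SPDK's waitforlisten
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
        [[ -S $rpc_addr ]] && return 0           # UNIX domain socket exists: target is up
        sleep 0.1
    done
    return 1                                     # gave up after max_retries polls
}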
00:32:00.388 [2024-11-27 04:54:07.467086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77058 ] 00:32:00.649 [2024-11-27 04:54:07.627402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.649 [2024-11-27 04:54:07.747912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.291 04:54:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.291 04:54:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:32:01.291 04:54:08 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:32:01.554 [2024-11-27 04:54:08.661830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:01.554 [2024-11-27 04:54:08.661915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:01.817 [2024-11-27 04:54:08.840854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.817 [2024-11-27 04:54:08.840916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:01.817 [2024-11-27 04:54:08.840933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:01.817 [2024-11-27 04:54:08.840942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.817 [2024-11-27 04:54:08.843893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.817 [2024-11-27 04:54:08.843945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.817 [2024-11-27 04:54:08.843959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:32:01.817 [2024-11-27 04:54:08.843967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.817 [2024-11-27 04:54:08.844100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:01.817 [2024-11-27 04:54:08.844857] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:01.817 [2024-11-27 04:54:08.844896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.817 [2024-11-27 04:54:08.844904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.817 [2024-11-27 04:54:08.844916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:32:01.817 [2024-11-27 04:54:08.844926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.817 [2024-11-27 04:54:08.846698] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:01.817 [2024-11-27 04:54:08.861143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.817 [2024-11-27 04:54:08.861211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:01.817 [2024-11-27 04:54:08.861226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.452 ms 00:32:01.817 [2024-11-27 04:54:08.861236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.817 [2024-11-27 04:54:08.861358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.817 [2024-11-27 04:54:08.861372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:01.817 [2024-11-27 04:54:08.861381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:01.817 [2024-11-27 04:54:08.861391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.817 [2024-11-27 04:54:08.869209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.869254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.818 [2024-11-27 04:54:08.869264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.761 ms 00:32:01.818 [2024-11-27 04:54:08.869275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.869401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.869415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.818 [2024-11-27 04:54:08.869424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:01.818 [2024-11-27 04:54:08.869439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.869469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.869480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:01.818 [2024-11-27 04:54:08.869488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:01.818 [2024-11-27 04:54:08.869498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.869521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:01.818 [2024-11-27 04:54:08.873462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.873501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.818 [2024-11-27 04:54:08.873514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.943 ms 00:32:01.818 [2024-11-27 04:54:08.873521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.873596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.873606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:01.818 [2024-11-27 04:54:08.873622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:01.818 [2024-11-27 04:54:08.873630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.873654] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:01.818 [2024-11-27 04:54:08.873677] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:01.818 [2024-11-27 04:54:08.873724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:01.818 [2024-11-27 04:54:08.873740] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:01.818 [2024-11-27 04:54:08.873849] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:01.818 [2024-11-27 04:54:08.873861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:01.818 [2024-11-27 04:54:08.873878] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:01.818 [2024-11-27 04:54:08.873889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:01.818 [2024-11-27 04:54:08.873900] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:01.818 [2024-11-27 04:54:08.873910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:01.818 [2024-11-27 04:54:08.873921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:01.818 [2024-11-27 04:54:08.873929] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:01.818 [2024-11-27 04:54:08.873940] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:01.818 [2024-11-27 04:54:08.873948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.873958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:01.818 [2024-11-27 04:54:08.873966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:32:01.818 [2024-11-27 04:54:08.873978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.874082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.818 [2024-11-27 04:54:08.874095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:01.818 [2024-11-27 04:54:08.874104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:01.818 [2024-11-27 04:54:08.874118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.818 [2024-11-27 04:54:08.874219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:01.818 [2024-11-27 04:54:08.874240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:01.818 [2024-11-27 04:54:08.874249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:01.818 [2024-11-27 04:54:08.874276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:01.818 [2024-11-27 04:54:08.874304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:01.818 [2024-11-27 04:54:08.874320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:01.818 [2024-11-27 04:54:08.874329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:01.818 [2024-11-27 04:54:08.874335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:01.818 [2024-11-27 04:54:08.874344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:01.818 [2024-11-27 04:54:08.874352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:01.818 [2024-11-27 04:54:08.874362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 
[2024-11-27 04:54:08.874369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:01.818 [2024-11-27 04:54:08.874379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:01.818 [2024-11-27 04:54:08.874408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:01.818 [2024-11-27 04:54:08.874435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:01.818 [2024-11-27 04:54:08.874457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:01.818 [2024-11-27 04:54:08.874481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:01.818 [2024-11-27 04:54:08.874504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:01.818 [2024-11-27 04:54:08.874520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:01.818 [2024-11-27 04:54:08.874529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:01.818 [2024-11-27 04:54:08.874535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:01.818 [2024-11-27 04:54:08.874544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:01.818 [2024-11-27 04:54:08.874550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:01.818 [2024-11-27 04:54:08.874561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:01.818 [2024-11-27 04:54:08.874576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:01.818 [2024-11-27 04:54:08.874583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:01.818 [2024-11-27 04:54:08.874601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:01.818 [2024-11-27 04:54:08.874610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.818 [2024-11-27 04:54:08.874629] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:01.818 [2024-11-27 04:54:08.874637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:01.818 [2024-11-27 04:54:08.874645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:01.818 [2024-11-27 04:54:08.874652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:01.818 [2024-11-27 04:54:08.874660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:01.818 [2024-11-27 04:54:08.874667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:01.818 [2024-11-27 04:54:08.874678] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:01.818 [2024-11-27 04:54:08.874688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.818 [2024-11-27 04:54:08.874700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:01.818 [2024-11-27 04:54:08.874707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:01.818 [2024-11-27 04:54:08.874717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:01.818 [2024-11-27 04:54:08.874725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:01.818 [2024-11-27 04:54:08.874735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:01.819 [2024-11-27 04:54:08.874742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:01.819 [2024-11-27 04:54:08.874751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:01.819 [2024-11-27 04:54:08.874758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:01.819 [2024-11-27 04:54:08.874766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:01.819 [2024-11-27 04:54:08.874775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:01.819 [2024-11-27 04:54:08.874828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:01.819 [2024-11-27 
04:54:08.874837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:01.819 [2024-11-27 04:54:08.874857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:01.819 [2024-11-27 04:54:08.874866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:01.819 [2024-11-27 04:54:08.874872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:01.819 [2024-11-27 04:54:08.874882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.874890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:01.819 [2024-11-27 04:54:08.874899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:32:01.819 [2024-11-27 04:54:08.874909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.906188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.906236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.819 [2024-11-27 04:54:08.906250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.216 ms 00:32:01.819 [2024-11-27 04:54:08.906261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.906393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.906404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:01.819 [2024-11-27 04:54:08.906415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:01.819 [2024-11-27 04:54:08.906425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.940953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.941004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.819 [2024-11-27 04:54:08.941018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.502 ms 00:32:01.819 [2024-11-27 04:54:08.941026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.941126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.941138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.819 [2024-11-27 04:54:08.941150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:01.819 [2024-11-27 04:54:08.941158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.941686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.941721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.819 [2024-11-27 04:54:08.941733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:32:01.819 [2024-11-27 04:54:08.941741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.941893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.941906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.819 [2024-11-27 04:54:08.941917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:32:01.819 [2024-11-27 04:54:08.941926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.960312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.960354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.819 [2024-11-27 04:54:08.960369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.360 ms 00:32:01.819 [2024-11-27 04:54:08.960377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:08.987163] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:01.819 [2024-11-27 04:54:08.987222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:01.819 [2024-11-27 04:54:08.987243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:08.987254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:01.819 [2024-11-27 04:54:08.987267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.750 ms 00:32:01.819 [2024-11-27 04:54:08.987281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.819 [2024-11-27 04:54:09.013046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.819 [2024-11-27 04:54:09.013107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:01.819 [2024-11-27 04:54:09.013123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.659 ms 00:32:01.819 [2024-11-27 04:54:09.013132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.081 [2024-11-27 04:54:09.026211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.081 [2024-11-27 04:54:09.026256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:02.081 [2024-11-27 04:54:09.026274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.983 ms 00:32:02.081 [2024-11-27 04:54:09.026283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.081 [2024-11-27 04:54:09.038312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.081 [2024-11-27 04:54:09.038354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:02.081 [2024-11-27 04:54:09.038372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.941 ms 00:32:02.081 [2024-11-27 04:54:09.038379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.081 [2024-11-27 04:54:09.039050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.081 [2024-11-27 04:54:09.039099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:02.081 [2024-11-27 04:54:09.039113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:32:02.081 [2024-11-27 04:54:09.039121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.081 [2024-11-27 
04:54:09.103040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.081 [2024-11-27 04:54:09.103119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:02.081 [2024-11-27 04:54:09.103137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.887 ms 00:32:02.081 [2024-11-27 04:54:09.103146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.081 [2024-11-27 04:54:09.114089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:02.082 [2024-11-27 04:54:09.132491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.132555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:02.082 [2024-11-27 04:54:09.132567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.246 ms 00:32:02.082 [2024-11-27 04:54:09.132578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.132672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.132685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:02.082 [2024-11-27 04:54:09.132695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:02.082 [2024-11-27 04:54:09.132706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.132763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.132777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:02.082 [2024-11-27 04:54:09.132786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:02.082 [2024-11-27 04:54:09.132798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.132824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.132835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:02.082 [2024-11-27 04:54:09.132843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:02.082 [2024-11-27 04:54:09.132856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.132890] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:02.082 [2024-11-27 04:54:09.132905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.132916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:02.082 [2024-11-27 04:54:09.132926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:02.082 [2024-11-27 04:54:09.132937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.158659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.158712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:02.082 [2024-11-27 04:54:09.158729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.693 ms 00:32:02.082 [2024-11-27 04:54:09.158737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.158873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.082 [2024-11-27 04:54:09.158886] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:02.082 [2024-11-27 04:54:09.158902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:02.082 [2024-11-27 04:54:09.158911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.082 [2024-11-27 04:54:09.160040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:02.082 [2024-11-27 04:54:09.163341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.866 ms, result 0 00:32:02.082 [2024-11-27 04:54:09.165536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:02.082 Some configs were skipped because the RPC state that can call them passed over. 00:32:02.082 04:54:09 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:02.343 [2024-11-27 04:54:09.417907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.343 [2024-11-27 04:54:09.417968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:02.343 [2024-11-27 04:54:09.417981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:32:02.343 [2024-11-27 04:54:09.417992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.343 [2024-11-27 04:54:09.418027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.182 ms, result 0 00:32:02.343 true 00:32:02.343 04:54:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:02.605 [2024-11-27 04:54:09.629370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.605 [2024-11-27 04:54:09.629423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:02.605 [2024-11-27 04:54:09.629437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:32:02.605 [2024-11-27 04:54:09.629445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.605 [2024-11-27 04:54:09.629483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.372 ms, result 0 00:32:02.605 true 00:32:02.605 04:54:09 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77058 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77058 ']' 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77058 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77058 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.605 killing process with pid 77058 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77058' 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77058 00:32:02.605 04:54:09 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77058 00:32:03.550 [2024-11-27 04:54:10.430518] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.430603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:03.550 [2024-11-27 04:54:10.430619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:03.550 [2024-11-27 04:54:10.430629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.430656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:03.550 [2024-11-27 04:54:10.433723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.433766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:03.550 [2024-11-27 04:54:10.433784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:32:03.550 [2024-11-27 04:54:10.433793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.434106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.434127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:03.550 [2024-11-27 04:54:10.434139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:32:03.550 [2024-11-27 04:54:10.434148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.438670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.438719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:03.550 [2024-11-27 04:54:10.438731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:32:03.550 [2024-11-27 04:54:10.438740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.445665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.445706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:03.550 [2024-11-27 04:54:10.445720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:32:03.550 [2024-11-27 04:54:10.445728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.456720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.456774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:03.550 [2024-11-27 04:54:10.456790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.927 ms 00:32:03.550 [2024-11-27 04:54:10.456797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.466285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.466337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:03.550 [2024-11-27 04:54:10.466351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.434 ms 00:32:03.550 [2024-11-27 04:54:10.466359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.466510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.466522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:03.550 [2024-11-27 04:54:10.466535] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:32:03.550 [2024-11-27 04:54:10.466543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.477684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.477727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:03.550 [2024-11-27 04:54:10.477740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.116 ms 00:32:03.550 [2024-11-27 04:54:10.477748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.488285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.488328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:03.550 [2024-11-27 04:54:10.488344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.486 ms 00:32:03.550 [2024-11-27 04:54:10.488352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.498418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.498461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:03.550 [2024-11-27 04:54:10.498473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.012 ms 00:32:03.550 [2024-11-27 04:54:10.498480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.508414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.550 [2024-11-27 04:54:10.508457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:03.550 [2024-11-27 04:54:10.508470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.856 ms 00:32:03.550 [2024-11-27 04:54:10.508477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.550 [2024-11-27 04:54:10.508523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:03.550 [2024-11-27 04:54:10.508538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:03.550 [2024-11-27 04:54:10.508631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508640] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 
[2024-11-27 04:54:10.508865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.508994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:32:03.551 [2024-11-27 04:54:10.509105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:03.551 [2024-11-27 04:54:10.509476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:03.552 [2024-11-27 04:54:10.509486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:03.552 [2024-11-27 04:54:10.509494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:03.552 [2024-11-27 04:54:10.509503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:03.552 [2024-11-27 04:54:10.509527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:03.552 [2024-11-27 04:54:10.509539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de 00:32:03.552 [2024-11-27 04:54:10.509551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:03.552 [2024-11-27 04:54:10.509560] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:03.552 [2024-11-27 04:54:10.509568] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:03.552 [2024-11-27 04:54:10.509579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:03.552 [2024-11-27 04:54:10.509587] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:03.552 [2024-11-27 04:54:10.509600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:03.552 [2024-11-27 04:54:10.509607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:03.552 [2024-11-27 04:54:10.509617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:03.552 [2024-11-27 04:54:10.509624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:03.552 [2024-11-27 04:54:10.509634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:03.552 [2024-11-27 04:54:10.509645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:03.552 [2024-11-27 04:54:10.509656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:32:03.552 [2024-11-27 04:54:10.509667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.523479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.552 [2024-11-27 04:54:10.523523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:03.552 [2024-11-27 04:54:10.523539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.768 ms 00:32:03.552 [2024-11-27 04:54:10.523547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.523962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.552 [2024-11-27 04:54:10.523984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:03.552 [2024-11-27 04:54:10.524000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:32:03.552 [2024-11-27 04:54:10.524008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.571984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.572029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:03.552 [2024-11-27 04:54:10.572043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.572051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.572190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.572200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:03.552 [2024-11-27 04:54:10.572212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.572219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.572274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.572284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:03.552 [2024-11-27 04:54:10.572295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.572302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.572320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.572327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:03.552 [2024-11-27 04:54:10.572335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.572344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.636498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.636544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:03.552 [2024-11-27 04:54:10.636556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.636563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 
04:54:10.685895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.685932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:03.552 [2024-11-27 04:54:10.685944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.685951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:03.552 [2024-11-27 04:54:10.686032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:03.552 [2024-11-27 04:54:10.686093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:03.552 [2024-11-27 04:54:10.686187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:03.552 [2024-11-27 04:54:10.686233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:03.552 [2024-11-27 04:54:10.686285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.552 [2024-11-27 04:54:10.686331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:03.552 [2024-11-27 04:54:10.686338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.552 [2024-11-27 04:54:10.686345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.552 [2024-11-27 04:54:10.686451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 255.923 ms, result 0 00:32:04.121 04:54:11 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:04.121 [2024-11-27 04:54:11.277359] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:32:04.121 [2024-11-27 04:54:11.277476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77112 ] 00:32:04.381 [2024-11-27 04:54:11.433185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.381 [2024-11-27 04:54:11.518456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.640 [2024-11-27 04:54:11.727035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.640 [2024-11-27 04:54:11.727096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.900 [2024-11-27 04:54:11.874724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.874767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:04.900 [2024-11-27 04:54:11.874778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:04.900 [2024-11-27 04:54:11.874784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.876870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.876901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.900 [2024-11-27 04:54:11.876908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:32:04.900 [2024-11-27 04:54:11.876914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.876969] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:04.900 [2024-11-27 04:54:11.877543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:04.900 [2024-11-27 04:54:11.877560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.877567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.900 [2024-11-27 04:54:11.877573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:32:04.900 [2024-11-27 04:54:11.877579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.878581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:04.900 [2024-11-27 04:54:11.888022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.888051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:04.900 [2024-11-27 04:54:11.888060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.442 ms 00:32:04.900 [2024-11-27 04:54:11.888073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.888134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.888143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:04.900 [2024-11-27 04:54:11.888150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:04.900 [2024-11-27 
04:54:11.888156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.892378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.892401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.900 [2024-11-27 04:54:11.892408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:32:04.900 [2024-11-27 04:54:11.892414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.892487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.892495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.900 [2024-11-27 04:54:11.892501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:04.900 [2024-11-27 04:54:11.892507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.892527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.892533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:04.900 [2024-11-27 04:54:11.892539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:04.900 [2024-11-27 04:54:11.892545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.892562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:04.900 [2024-11-27 04:54:11.895122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.895145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.900 [2024-11-27 04:54:11.895152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:32:04.900 [2024-11-27 04:54:11.895158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.895185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.900 [2024-11-27 04:54:11.895192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:04.900 [2024-11-27 04:54:11.895198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:04.900 [2024-11-27 04:54:11.895203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.900 [2024-11-27 04:54:11.895218] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:04.901 [2024-11-27 04:54:11.895231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:04.901 [2024-11-27 04:54:11.895257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:04.901 [2024-11-27 04:54:11.895269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:04.901 [2024-11-27 04:54:11.895347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:04.901 [2024-11-27 04:54:11.895356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:04.901 [2024-11-27 04:54:11.895364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
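The startup and shutdown logs on either side of this point are dominated by a single record shape: each management step is traced as four consecutive notices, a marker ("Action" on the forward path, "Rollback" when completed steps are unwound in reverse during shutdown or on failure), the step name, the measured duration in milliseconds, and a status code where 0 means success. Below is a minimal sketch of that logging pattern in plain C, with a hypothetical run_step helper; it illustrates the record format only and is not SPDK's mngt/ftl_mngt.c implementation.

#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void *ctx);

/* Run one management step and emit the four-entry trace record seen in
 * the log: marker, name, duration (ms), status. */
static int run_step(const char *marker, const char *name, step_fn fn, void *ctx)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int status = fn(ctx);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("[FTL][ftl0] %s\n", marker);
    printf("[FTL][ftl0] name:     %s\n", name);
    printf("[FTL][ftl0] duration: %.3f ms\n", ms);
    printf("[FTL][ftl0] status:   %d\n", status);
    return status;
}

/* Stand-in for a real step body (hypothetical). */
static int load_super_block(void *ctx) { (void)ctx; return 0; }

int main(void)
{
    return run_step("Action", "Load super block", load_super_block, NULL);
}

The reported durations line up with the wall-clock gaps between entries: the "Load super block" step just above reports 9.442 ms, matching the jump from 04:54:11.878 to 04:54:11.888 in the surrounding timestamps.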
00:32:04.901 [2024-11-27 04:54:11.895373] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895380] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:04.901 [2024-11-27 04:54:11.895392] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:04.901 [2024-11-27 04:54:11.895397] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:04.901 [2024-11-27 04:54:11.895403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:04.901 [2024-11-27 04:54:11.895409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.901 [2024-11-27 04:54:11.895414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:04.901 [2024-11-27 04:54:11.895420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:32:04.901 [2024-11-27 04:54:11.895426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.901 [2024-11-27 04:54:11.895492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.901 [2024-11-27 04:54:11.895500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:04.901 [2024-11-27 04:54:11.895506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:04.901 [2024-11-27 04:54:11.895511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.901 [2024-11-27 04:54:11.895586] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:04.901 [2024-11-27 04:54:11.895593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:04.901 [2024-11-27 04:54:11.895600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:04.901 [2024-11-27 04:54:11.895617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:04.901 [2024-11-27 04:54:11.895633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.901 [2024-11-27 04:54:11.895643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:04.901 [2024-11-27 04:54:11.895653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:04.901 [2024-11-27 04:54:11.895658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.901 [2024-11-27 04:54:11.895664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:04.901 [2024-11-27 04:54:11.895669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:04.901 [2024-11-27 04:54:11.895674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:32:04.901 [2024-11-27 04:54:11.895685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:04.901 [2024-11-27 04:54:11.895701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:04.901 [2024-11-27 04:54:11.895715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:04.901 [2024-11-27 04:54:11.895730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:04.901 [2024-11-27 04:54:11.895745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:04.901 [2024-11-27 04:54:11.895759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.901 [2024-11-27 04:54:11.895770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:04.901 [2024-11-27 04:54:11.895775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:04.901 [2024-11-27 04:54:11.895779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.901 [2024-11-27 04:54:11.895784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:04.901 [2024-11-27 04:54:11.895790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:04.901 [2024-11-27 04:54:11.895794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:04.901 [2024-11-27 04:54:11.895805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:04.901 [2024-11-27 04:54:11.895810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:04.901 [2024-11-27 04:54:11.895820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:04.901 [2024-11-27 04:54:11.895827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.901 [2024-11-27 04:54:11.895838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:04.901 [2024-11-27 04:54:11.895845] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:04.901 [2024-11-27 04:54:11.895851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:04.901 [2024-11-27 04:54:11.895856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:04.901 [2024-11-27 04:54:11.895861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:04.901 [2024-11-27 04:54:11.895866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:04.901 [2024-11-27 04:54:11.895872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:04.901 [2024-11-27 04:54:11.895878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:04.901 [2024-11-27 04:54:11.895890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:04.901 [2024-11-27 04:54:11.895896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:04.901 [2024-11-27 04:54:11.895901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:04.901 [2024-11-27 04:54:11.895906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:04.901 [2024-11-27 04:54:11.895911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:04.901 [2024-11-27 04:54:11.895916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:04.901 [2024-11-27 04:54:11.895922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:04.901 [2024-11-27 04:54:11.895927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:04.901 [2024-11-27 04:54:11.895932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:04.901 [2024-11-27 04:54:11.895959] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:04.901 [2024-11-27 04:54:11.895965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:04.901 [2024-11-27 04:54:11.895976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:04.901 [2024-11-27 04:54:11.895982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:04.901 [2024-11-27 04:54:11.895987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:04.902 [2024-11-27 04:54:11.895993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.896000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:04.902 [2024-11-27 04:54:11.896006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:32:04.902 [2024-11-27 04:54:11.896011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.916486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.916514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.902 [2024-11-27 04:54:11.916522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.436 ms 00:32:04.902 [2024-11-27 04:54:11.916528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.916623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.916631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:04.902 [2024-11-27 04:54:11.916637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:04.902 [2024-11-27 04:54:11.916643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.956628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.956661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.902 [2024-11-27 04:54:11.956673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.968 ms 00:32:04.902 [2024-11-27 04:54:11.956680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.956737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.956746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.902 [2024-11-27 04:54:11.956753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:04.902 [2024-11-27 04:54:11.956759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.957043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.957054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.902 [2024-11-27 04:54:11.957062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:32:04.902 [2024-11-27 04:54:11.957083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.957186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.957198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.902 [2024-11-27 04:54:11.957205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:04.902 [2024-11-27 04:54:11.957211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.967871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.967898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.902 [2024-11-27 04:54:11.967906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.644 ms 00:32:04.902 [2024-11-27 04:54:11.967912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.977631] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:04.902 [2024-11-27 04:54:11.977658] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:04.902 [2024-11-27 04:54:11.977667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.977673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:04.902 [2024-11-27 04:54:11.977680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.669 ms 00:32:04.902 [2024-11-27 04:54:11.977686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:11.996337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:11.996372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:04.902 [2024-11-27 04:54:11.996383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.603 ms 00:32:04.902 [2024-11-27 04:54:11.996390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.005185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.005212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:04.902 [2024-11-27 04:54:12.005220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.733 ms 00:32:04.902 [2024-11-27 04:54:12.005226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.013901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.013925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:04.902 [2024-11-27 04:54:12.013933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.633 ms 00:32:04.902 [2024-11-27 04:54:12.013938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.014404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.014425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:04.902 [2024-11-27 04:54:12.014433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:32:04.902 [2024-11-27 04:54:12.014438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.057509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.057550] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:04.902 [2024-11-27 04:54:12.057560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.052 ms 00:32:04.902 [2024-11-27 04:54:12.057566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.065703] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:04.902 [2024-11-27 04:54:12.077102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.077131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:04.902 [2024-11-27 04:54:12.077141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.458 ms 00:32:04.902 [2024-11-27 04:54:12.077150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.077224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.077232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:04.902 [2024-11-27 04:54:12.077239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:04.902 [2024-11-27 04:54:12.077245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.077282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.077289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:04.902 [2024-11-27 04:54:12.077296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:04.902 [2024-11-27 04:54:12.077304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.077337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.077344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:04.902 [2024-11-27 04:54:12.077350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:04.902 [2024-11-27 04:54:12.077355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.077378] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:04.902 [2024-11-27 04:54:12.077385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.077391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:04.902 [2024-11-27 04:54:12.077397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:04.902 [2024-11-27 04:54:12.077403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.095073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.095101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:04.902 [2024-11-27 04:54:12.095110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.655 ms 00:32:04.902 [2024-11-27 04:54:12.095116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.095187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.902 [2024-11-27 04:54:12.095195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:04.902 [2024-11-27 04:54:12.095202] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:04.902 [2024-11-27 04:54:12.095208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.902 [2024-11-27 04:54:12.095873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:04.902 [2024-11-27 04:54:12.098249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.927 ms, result 0 00:32:04.902 [2024-11-27 04:54:12.099199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:05.163 [2024-11-27 04:54:12.113870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:06.108  [2024-11-27T04:54:14.257Z] Copying: 16/256 [MB] (16 MBps) [2024-11-27T04:54:15.202Z] Copying: 26/256 [MB] (10 MBps) [2024-11-27T04:54:16.592Z] Copying: 37/256 [MB] (10 MBps) [2024-11-27T04:54:17.167Z] Copying: 47/256 [MB] (10 MBps) [2024-11-27T04:54:18.556Z] Copying: 58/256 [MB] (11 MBps) [2024-11-27T04:54:19.495Z] Copying: 69/256 [MB] (11 MBps) [2024-11-27T04:54:20.429Z] Copying: 90/256 [MB] (21 MBps) [2024-11-27T04:54:21.366Z] Copying: 128/256 [MB] (38 MBps) [2024-11-27T04:54:22.305Z] Copying: 165/256 [MB] (36 MBps) [2024-11-27T04:54:23.245Z] Copying: 197/256 [MB] (32 MBps) [2024-11-27T04:54:24.189Z] Copying: 228/256 [MB] (30 MBps) [2024-11-27T04:54:25.132Z] Copying: 241/256 [MB] (13 MBps) [2024-11-27T04:54:25.392Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-27 04:54:25.171506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:18.189 [2024-11-27 04:54:25.183127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.183182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:18.189 [2024-11-27 04:54:25.183212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:18.189 [2024-11-27 04:54:25.183222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.183251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:18.189 [2024-11-27 04:54:25.186595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.186643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:18.189 [2024-11-27 04:54:25.186656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:32:18.189 [2024-11-27 04:54:25.186665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.186974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.186989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:18.189 [2024-11-27 04:54:25.186999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:32:18.189 [2024-11-27 04:54:25.187009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.191440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.191475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:18.189 [2024-11-27 04:54:25.191487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
4.408 ms 00:32:18.189 [2024-11-27 04:54:25.191496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.199377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.199431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:18.189 [2024-11-27 04:54:25.199444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.855 ms 00:32:18.189 [2024-11-27 04:54:25.199455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.226669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.226722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:18.189 [2024-11-27 04:54:25.226737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.127 ms 00:32:18.189 [2024-11-27 04:54:25.226747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.243843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.243892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:18.189 [2024-11-27 04:54:25.243914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.041 ms 00:32:18.189 [2024-11-27 04:54:25.243924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.244120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.244135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:18.189 [2024-11-27 04:54:25.244161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:32:18.189 [2024-11-27 04:54:25.244170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.270510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.270557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:18.189 [2024-11-27 04:54:25.270570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.319 ms 00:32:18.189 [2024-11-27 04:54:25.270577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.296033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.296109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:18.189 [2024-11-27 04:54:25.296122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.391 ms 00:32:18.189 [2024-11-27 04:54:25.296130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.321013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.189 [2024-11-27 04:54:25.321062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:18.189 [2024-11-27 04:54:25.321085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.832 ms 00:32:18.189 [2024-11-27 04:54:25.321093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.189 [2024-11-27 04:54:25.345663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.190 [2024-11-27 04:54:25.345710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:18.190 [2024-11-27 
04:54:25.345723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:32:18.190 [2024-11-27 04:54:25.345732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.190 [2024-11-27 04:54:25.345781] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:18.190 [2024-11-27 04:54:25.345800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.345999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:18.190 [2024-11-27 04:54:25.346223] ftl_debug.c: 167:ftl_dev_dump_bands: 
00:32:18.190 [2024-11-27 04:54:25.346231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 48-100: 0 / 261120 wr_cnt: 0 state: free
00:32:18.191 [2024-11-27 04:54:25.346672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:18.191 [2024-11-27 04:54:25.346682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 507cd504-226e-4c6b-9d3c-7332f33276de
00:32:18.191 [2024-11-27 04:54:25.346691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:32:18.191 [2024-11-27 04:54:25.346700] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:32:18.191 [2024-11-27 04:54:25.346719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:18.191 [2024-11-27 04:54:25.346729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:18.191 [2024-11-27 04:54:25.346738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:18.191 [2024-11-27 04:54:25.346747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:32:18.191 [2024-11-27 04:54:25.346760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:32:18.191 [2024-11-27 04:54:25.346767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low: 0
00:32:18.191 [2024-11-27 04:54:25.346773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
00:32:18.191 [2024-11-27 04:54:25.346784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.005 ms, status: 0
00:32:18.191 [2024-11-27 04:54:25.361358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 14.511 ms, status: 0
00:32:18.191 [2024-11-27 04:54:25.361870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.398 ms, status: 0
00:32:18.452 [2024-11-27 04:54:25.403971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
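The "WAF: inf" entry above is the device's write-amplification factor with a zero denominator: ftl_dev_dump_stats divides total media writes (960 in this run) by user writes (0 in this run). A throwaway sketch of that arithmetic, not part of the SPDK harness:

    # Hypothetical check: recompute WAF from the two counters dumped above.
    total_writes=960   # media writes reported by ftl_dev_dump_stats
    user_writes=0      # host-initiated writes
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.3f\n", t / u }'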
00:32:18.452 [2024-11-27 04:54:25.404198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:32:18.452 [2024-11-27 04:54:25.404294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:32:18.452 [2024-11-27 04:54:25.404349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:32:18.452 [2024-11-27 04:54:25.495071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:32:18.452 [2024-11-27 04:54:25.569742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.569959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:32:18.453 [2024-11-27 04:54:25.570696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.587 ms, result 0
00:32:19.394 04:54:26 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:19.966 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:32:19.966 04:54:26 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:32:19.966 04:54:26 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:32:19.966 04:54:26 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:19.966 04:54:26 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:19.966 04:54:26 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:32:19.967 04:54:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:32:19.967 04:54:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77058
00:32:19.967 04:54:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77058 ']'
00:32:19.967 Process with pid 77058 is not found
00:32:19.967 04:54:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77058
00:32:19.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77058) - No such process
00:32:19.967 04:54:27 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77058 is not found'
00:32:19.967 real 1m17.015s
00:32:19.967 user 1m33.354s
00:32:19.967 sys 0m10.645s
00:32:19.967 04:54:27 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:19.967 ************************************
00:32:19.967 END TEST ftl_trim
00:32:19.967 ************************************
00:32:19.967 04:54:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:32:19.967 04:54:27 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:32:19.967 04:54:27 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
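The md5sum -c above closes the loop on ftl_trim's data-integrity check: a checksum of the data file was recorded into testfile.md5 earlier in the test, and the post-shutdown readback must verify byte-identical before the scratch files are removed. A minimal sketch of that record/verify pattern, reusing the file names from this run (the step that writes the data itself is elided):

    # Record the checksum before the device is torn down ...
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ... FTL shutdown and readback happen in between ...
    # ... then verify; prints "<path>: OK" only for an identical readback.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5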
00:32:19.967 04:54:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:19.967 04:54:27 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:19.967 ************************************
00:32:19.967 START TEST ftl_restore
00:32:19.967 ************************************
00:32:19.967 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:32:20.228 * Looking for test storage...
00:32:20.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:20.228 04:54:27 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
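The LCOV_OPTS and LCOV exports above only stage lcov's --rc switches for any coverage pass that follows; nothing is measured at this point in the run. For reference, this is the shape of a capture that would consume them (the directory and output path here are made up, only the flags come from this log):

    # Hypothetical coverage capture using the staged options.
    lcov --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info $LCOV_OPTS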
00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FGA2RCDWDN 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:20.228 
04:54:27 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77350 00:32:20.228 04:54:27 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77350 00:32:20.228 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77350 ']' 00:32:20.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.229 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.229 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.229 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.229 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.229 04:54:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:32:20.229 04:54:27 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.490 [2024-11-27 04:54:27.435163] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:32:20.490 [2024-11-27 04:54:27.435871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77350 ] 00:32:20.490 [2024-11-27 04:54:27.600585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.750 [2024-11-27 04:54:27.747633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:32:21.692 04:54:28 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:32:21.692 04:54:28 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:21.953 { 00:32:21.953 "name": "nvme0n1", 00:32:21.953 "aliases": [ 00:32:21.953 "fd6d5288-7a36-4d1d-affc-95b537ada904" 00:32:21.953 ], 00:32:21.953 "product_name": "NVMe disk", 00:32:21.953 "block_size": 4096, 00:32:21.953 "num_blocks": 1310720, 00:32:21.953 "uuid": 
"fd6d5288-7a36-4d1d-affc-95b537ada904", 00:32:21.953 "numa_id": -1, 00:32:21.953 "assigned_rate_limits": { 00:32:21.953 "rw_ios_per_sec": 0, 00:32:21.953 "rw_mbytes_per_sec": 0, 00:32:21.953 "r_mbytes_per_sec": 0, 00:32:21.953 "w_mbytes_per_sec": 0 00:32:21.953 }, 00:32:21.953 "claimed": true, 00:32:21.953 "claim_type": "read_many_write_one", 00:32:21.953 "zoned": false, 00:32:21.953 "supported_io_types": { 00:32:21.953 "read": true, 00:32:21.953 "write": true, 00:32:21.953 "unmap": true, 00:32:21.953 "flush": true, 00:32:21.953 "reset": true, 00:32:21.953 "nvme_admin": true, 00:32:21.953 "nvme_io": true, 00:32:21.953 "nvme_io_md": false, 00:32:21.953 "write_zeroes": true, 00:32:21.953 "zcopy": false, 00:32:21.953 "get_zone_info": false, 00:32:21.953 "zone_management": false, 00:32:21.953 "zone_append": false, 00:32:21.953 "compare": true, 00:32:21.953 "compare_and_write": false, 00:32:21.953 "abort": true, 00:32:21.953 "seek_hole": false, 00:32:21.953 "seek_data": false, 00:32:21.953 "copy": true, 00:32:21.953 "nvme_iov_md": false 00:32:21.953 }, 00:32:21.953 "driver_specific": { 00:32:21.953 "nvme": [ 00:32:21.953 { 00:32:21.953 "pci_address": "0000:00:11.0", 00:32:21.953 "trid": { 00:32:21.953 "trtype": "PCIe", 00:32:21.953 "traddr": "0000:00:11.0" 00:32:21.953 }, 00:32:21.953 "ctrlr_data": { 00:32:21.953 "cntlid": 0, 00:32:21.953 "vendor_id": "0x1b36", 00:32:21.953 "model_number": "QEMU NVMe Ctrl", 00:32:21.953 "serial_number": "12341", 00:32:21.953 "firmware_revision": "8.0.0", 00:32:21.953 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:21.953 "oacs": { 00:32:21.953 "security": 0, 00:32:21.953 "format": 1, 00:32:21.953 "firmware": 0, 00:32:21.953 "ns_manage": 1 00:32:21.953 }, 00:32:21.953 "multi_ctrlr": false, 00:32:21.953 "ana_reporting": false 00:32:21.953 }, 00:32:21.953 "vs": { 00:32:21.953 "nvme_version": "1.4" 00:32:21.953 }, 00:32:21.953 "ns_data": { 00:32:21.953 "id": 1, 00:32:21.953 "can_share": false 00:32:21.953 } 00:32:21.953 } 00:32:21.953 ], 00:32:21.953 "mp_policy": "active_passive" 00:32:21.953 } 00:32:21.953 } 00:32:21.953 ]' 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:21.953 04:54:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:32:21.953 04:54:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:32:21.953 04:54:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:21.953 04:54:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:32:21.953 04:54:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:21.953 04:54:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:22.225 04:54:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=59f2548c-f5a4-4cf8-9bca-a458bff5c981 00:32:22.225 04:54:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:32:22.225 04:54:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59f2548c-f5a4-4cf8-9bca-a458bff5c981 00:32:22.485 04:54:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:32:22.745 04:54:29 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b2a62cea-fe9a-4e90-9b2b-7d890fe61b51 00:32:22.745 04:54:29 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2a62cea-fe9a-4e90-9b2b-7d890fe61b51 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:32:23.005 04:54:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.005 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.005 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:23.005 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:32:23.005 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:32:23.005 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48bb3d41-c6a9-41b7-b0de-8601f12ad64c 00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:23.266 { 00:32:23.266 "name": "48bb3d41-c6a9-41b7-b0de-8601f12ad64c", 00:32:23.266 "aliases": [ 00:32:23.266 "lvs/nvme0n1p0" 00:32:23.266 ], 00:32:23.266 "product_name": "Logical Volume", 00:32:23.266 "block_size": 4096, 00:32:23.266 "num_blocks": 26476544, 00:32:23.266 "uuid": "48bb3d41-c6a9-41b7-b0de-8601f12ad64c", 00:32:23.266 "assigned_rate_limits": { 00:32:23.266 "rw_ios_per_sec": 0, 00:32:23.266 "rw_mbytes_per_sec": 0, 00:32:23.266 "r_mbytes_per_sec": 0, 00:32:23.266 "w_mbytes_per_sec": 0 00:32:23.266 }, 00:32:23.266 "claimed": false, 00:32:23.266 "zoned": false, 00:32:23.266 "supported_io_types": { 00:32:23.266 "read": true, 00:32:23.266 "write": true, 00:32:23.266 "unmap": true, 00:32:23.266 "flush": false, 00:32:23.266 "reset": true, 00:32:23.266 "nvme_admin": false, 00:32:23.266 "nvme_io": false, 00:32:23.266 "nvme_io_md": false, 00:32:23.266 "write_zeroes": true, 00:32:23.266 "zcopy": false, 00:32:23.266 "get_zone_info": false, 00:32:23.266 "zone_management": false, 00:32:23.266 "zone_append": false, 00:32:23.266 "compare": false, 00:32:23.266 "compare_and_write": false, 00:32:23.266 "abort": false, 00:32:23.266 "seek_hole": true, 00:32:23.266 "seek_data": true, 00:32:23.266 "copy": false, 00:32:23.266 "nvme_iov_md": false 00:32:23.266 }, 00:32:23.266 "driver_specific": { 00:32:23.266 "lvol": { 00:32:23.266 "lvol_store_uuid": "b2a62cea-fe9a-4e90-9b2b-7d890fe61b51", 00:32:23.266 "base_bdev": "nvme0n1", 00:32:23.266 "thin_provision": true, 00:32:23.266 "num_allocated_clusters": 0, 00:32:23.266 "snapshot": false, 00:32:23.266 "clone": false, 00:32:23.266 "esnap_clone": false 00:32:23.266 } 00:32:23.266 } 00:32:23.266 } 00:32:23.266 ]' 00:32:23.266 04:54:30 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:23.266 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:32:23.266 04:54:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:32:23.266 04:54:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:32:23.266 04:54:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:32:23.525 04:54:30 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:32:23.525 04:54:30 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:32:23.525 04:54:30 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:23.525 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:23.525 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:23.525 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:32:23.525 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:32:23.525 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ { "name": "48bb3d41-c6a9-41b7-b0de-8601f12ad64c", ... } ]'
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
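get_bdev_size, traced here and again below, is nothing but bdev_get_bdevs piped through jq, with the size computed as block_size times num_blocks: 4096 B x 26476544 blocks = 108447924224 B = 103424 MiB, which is exactly the repeated bdev_size=103424. The same query as a standalone sketch:

    # Recompute the size of the lvol queried above, in MiB.
    name=48bb3d41-c6a9-41b7-b0de-8601f12ad64c
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$name")
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544
    echo $(( bs * nb / 1024 / 1024 ))         # 103424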
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:23.783 04:54:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:32:23.783 04:54:30 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:32:23.783 04:54:30 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:32:24.043 04:54:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
00:32:24.043 04:54:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:24.043 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:24.043 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:24.043 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:32:24.043 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:32:24.043 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48bb3d41-c6a9-41b7-b0de-8601f12ad64c
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ { "name": "48bb3d41-c6a9-41b7-b0de-8601f12ad64c", ... } ]'
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
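bdev_split_create above carves the write-buffer cache out of the nvc0 controller's namespace: -s 5171 asks for one 5171 MiB slice, published as nvc0n1p0, and 5171 MiB is the base_size common.sh derived earlier, which works out to about 5% of the 103424 MiB data volume. The two cache-side RPCs from this run, replayed in isolation:

    # Attach the cache controller, then take one 5171 MiB split for the NV cache.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # -> nvc0n1p0, handed to bdev_ftl_create below via '-c nvc0n1p0'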
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:24.302 04:54:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 48bb3d41-c6a9-41b7-b0de-8601f12ad64c --l2p_dram_limit 10'
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:32:24.302 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:32:24.302 04:54:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48bb3d41-c6a9-41b7-b0de-8601f12ad64c --l2p_dram_limit 10 -c nvc0n1p0
00:32:24.563 [2024-11-27 04:54:31.529825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.529940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.031 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.529980] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:24.563 [2024-11-27 04:54:31.530516] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:24.563 [2024-11-27 04:54:31.530533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.554 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.530583] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1070351d-6946-4b84-87eb-caed8417ea7c
00:32:24.563 [2024-11-27 04:54:31.531839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Default-initialize superblock, duration: 0.028 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.538761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 6.830 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.538914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.056 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.538975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.009 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.539017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:24.563 [2024-11-27 04:54:31.542273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 3.259 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.542343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
00:32:24.563 [2024-11-27 04:54:31.542389] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:32:24.563 [2024-11-27 04:54:31.542499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:32:24.563 [2024-11-27 04:54:31.542512] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:32:24.563 [2024-11-27 04:54:31.542521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:32:24.563 [2024-11-27 04:54:31.542531] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:32:24.563 [2024-11-27 04:54:31.542538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:32:24.563 [2024-11-27 04:54:31.542546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:32:24.563 [2024-11-27 04:54:31.542554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:32:24.563 [2024-11-27 04:54:31.542562] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
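Everything from "Check configuration" onward is driven by the single bdev_ftl_create RPC assembled in ftl_construct_args above, and the L2P figures it reports are self-consistent: 20971520 entries times 4 B per entry (the "L2P address size") is exactly the 80.00 MiB l2p region dumped below, while --l2p_dram_limit 10 caps how much of that table may stay resident in DRAM at once. The call as issued in this run:

    # Verbatim from the trace above; -t 240 widens the RPC timeout to cover the NV cache scrub.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d 48bb3d41-c6a9-41b7-b0de-8601f12ad64c \
        --l2p_dram_limit 10 -c nvc0n1p0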
00:32:24.564 [2024-11-27 04:54:31.542567] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:32:24.564 [2024-11-27 04:54:31.542575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.186 ms, status: 0
00:32:24.564 [2024-11-27 04:54:31.542668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.053 ms, status: 0
00:32:24.564 [2024-11-27 04:54:31.542775] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:32:24.564 [2024-11-27 04:54:31.542782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.542804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:32:24.564 [2024-11-27 04:54:31.542822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:32:24.564 [2024-11-27 04:54:31.542843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:32:24.564 [2024-11-27 04:54:31.542860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.542880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.542898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:32:24.564 [2024-11-27 04:54:31.542915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:32:24.564 [2024-11-27 04:54:31.542931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:32:24.564 [2024-11-27 04:54:31.542949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:32:24.564 [2024-11-27 04:54:31.542966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:32:24.564 [2024-11-27 04:54:31.542986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:32:24.564 [2024-11-27 04:54:31.543001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.543020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.543037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:32:24.564 [2024-11-27 04:54:31.543045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:32:24.564 [2024-11-27 04:54:31.543074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:32:24.564 [2024-11-27 04:54:31.543095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:32:24.564 [2024-11-27 04:54:31.543116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:32:24.564 [2024-11-27 04:54:31.543126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:32:24.564 [2024-11-27 04:54:31.543140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:32:24.564 [2024-11-27 04:54:31.543146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:32:24.564 [2024-11-27 04:54:31.543153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:32:24.564 [2024-11-27 04:54:31.543158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:32:24.564 [2024-11-27 04:54:31.543165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:32:24.564 [2024-11-27 04:54:31.543170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:32:24.564 [2024-11-27 04:54:31.543178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:32:24.564 [2024-11-27 04:54:31.543183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:32:24.564 [2024-11-27 04:54:31.543192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:32:24.564 [2024-11-27 04:54:31.543223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:32:24.564 [2024-11-27 04:54:31.543232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:24.564 [2024-11-27 04:54:31.543246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:32:24.564 [2024-11-27 04:54:31.543251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
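In the superblock dump above, blk_offs and blk_sz are counted in FTL blocks, so the two views of the layout can be cross-checked against each other: the l2p region (type:0x2) spans blk_sz 0x5000 = 20480 blocks, and 20480 x 4 KiB = 80.00 MiB, matching "Region l2p ... blocks 80.00 MiB" in the MiB dump. A quick conversion sketch (4096-byte blocks assumed):

    # Convert a region's hex block count into MiB.
    blk_sz=0x5000                                  # l2p region, type:0x2
    echo $(( blk_sz * 4096 / 1024 / 1024 )) MiB    # -> 80 MiB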
00:32:24.564 [2024-11-27 04:54:31.543259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:32:24.564 [2024-11-27 04:54:31.543265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.545 ms, status: 0
00:32:24.564 [2024-11-27 04:54:31.543326] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:32:24.564 [2024-11-27 04:54:31.543338] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:32:28.769 [2024-11-27 04:54:35.256573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Scrub NV cache, duration: 3713.233 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.280236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 23.417 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.280385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.055 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.306917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 26.472 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.306990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.307421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.369 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.307538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.068 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.320522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 12.945 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.345564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:28.769 [2024-11-27 04:54:35.348872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 28.236 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.413053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Clear L2P, duration: 64.090 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.413264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.121 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.431059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Save initial band info metadata, duration: 17.719 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.448234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Save initial chunk info metadata, duration: 17.088 ms, status: 0
00:32:28.769 [2024-11-27 04:54:35.448720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.419 ms, status: 0
[2024-11-27 04:54:35.448745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:32:28.769 [2024-11-27 04:54:35.448752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.510633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.510658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:28.769 [2024-11-27 04:54:35.510669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.857 ms 00:32:28.769 [2024-11-27 04:54:35.510676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.529966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.529991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:28.769 [2024-11-27 04:54:35.530002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.237 ms 00:32:28.769 [2024-11-27 04:54:35.530008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.547675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.547699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:28.769 [2024-11-27 04:54:35.547709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.637 ms 00:32:28.769 [2024-11-27 04:54:35.547715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.565659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.565685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:28.769 [2024-11-27 04:54:35.565695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.913 ms 00:32:28.769 [2024-11-27 04:54:35.565701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.565734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.565741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:28.769 [2024-11-27 04:54:35.565752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:28.769 [2024-11-27 04:54:35.565758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.565821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.769 [2024-11-27 04:54:35.565831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:28.769 [2024-11-27 04:54:35.565839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:28.769 [2024-11-27 04:54:35.565845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.769 [2024-11-27 04:54:35.566710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4036.506 ms, result 0 00:32:28.769 { 00:32:28.769 "name": "ftl0", 00:32:28.769 "uuid": "1070351d-6946-4b84-87eb-caed8417ea7c" 00:32:28.769 } 00:32:28.769 04:54:35 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:28.769 04:54:35 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:28.769 04:54:35 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:32:28.769 04:54:35 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:29.032 [2024-11-27 04:54:35.986246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:35.986287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:29.032 [2024-11-27 04:54:35.986297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:29.032 [2024-11-27 04:54:35.986305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:35.986325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:29.032 [2024-11-27 04:54:35.988629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:35.988651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:29.032 [2024-11-27 04:54:35.988661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:32:29.032 [2024-11-27 04:54:35.988668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:35.988877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:35.988890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:29.032 [2024-11-27 04:54:35.988898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:32:29.032 [2024-11-27 04:54:35.988905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:35.991354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:35.991369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:29.032 [2024-11-27 04:54:35.991379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:32:29.032 [2024-11-27 04:54:35.991386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:35.996144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:35.996165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:29.032 [2024-11-27 04:54:35.996175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:32:29.032 [2024-11-27 04:54:35.996181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.014909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.014932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:29.032 [2024-11-27 04:54:36.014941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.671 ms 00:32:29.032 [2024-11-27 04:54:36.014947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.027268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.027292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:29.032 [2024-11-27 04:54:36.027302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:32:29.032 [2024-11-27 04:54:36.027310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.027414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.027422] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:29.032 [2024-11-27 04:54:36.027430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:29.032 [2024-11-27 04:54:36.027436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.045157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.045179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:29.032 [2024-11-27 04:54:36.045189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.703 ms 00:32:29.032 [2024-11-27 04:54:36.045194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.063030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.063052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:29.032 [2024-11-27 04:54:36.063060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.807 ms 00:32:29.032 [2024-11-27 04:54:36.063072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.080156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.080177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:29.032 [2024-11-27 04:54:36.080186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.055 ms 00:32:29.032 [2024-11-27 04:54:36.080192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.097137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.032 [2024-11-27 04:54:36.097159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:29.032 [2024-11-27 04:54:36.097168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.890 ms 00:32:29.032 [2024-11-27 04:54:36.097174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.032 [2024-11-27 04:54:36.097201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:29.032 [2024-11-27 04:54:36.097212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097277] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:29.032 [2024-11-27 04:54:36.097384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 
[2024-11-27 04:54:36.097458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:32:29.033 [2024-11-27 04:54:36.097630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:29.033 [2024-11-27 04:54:36.097907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:29.033 [2024-11-27 04:54:36.097914] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1070351d-6946-4b84-87eb-caed8417ea7c 00:32:29.033 [2024-11-27 04:54:36.097920] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:29.033 [2024-11-27 04:54:36.097930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:29.033 [2024-11-27 04:54:36.097938] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:29.033 [2024-11-27 04:54:36.097945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:29.033 [2024-11-27 04:54:36.097951] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:29.033 [2024-11-27 04:54:36.097958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:29.033 [2024-11-27 04:54:36.097964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:29.033 [2024-11-27 04:54:36.097970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:29.033 [2024-11-27 04:54:36.097974] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:32:29.033 [2024-11-27 04:54:36.097982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.033 [2024-11-27 04:54:36.097988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:29.033 [2024-11-27 04:54:36.097996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:32:29.033 [2024-11-27 04:54:36.098003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.033 [2024-11-27 04:54:36.108085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.033 [2024-11-27 04:54:36.108104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:29.033 [2024-11-27 04:54:36.108114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.057 ms 00:32:29.034 [2024-11-27 04:54:36.108120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.108413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.034 [2024-11-27 04:54:36.108421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:29.034 [2024-11-27 04:54:36.108431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:32:29.034 [2024-11-27 04:54:36.108437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.143497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.034 [2024-11-27 04:54:36.143523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:29.034 [2024-11-27 04:54:36.143534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.034 [2024-11-27 04:54:36.143540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.143589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.034 [2024-11-27 04:54:36.143597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:29.034 [2024-11-27 04:54:36.143606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.034 [2024-11-27 04:54:36.143612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.143672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.034 [2024-11-27 04:54:36.143680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:29.034 [2024-11-27 04:54:36.143688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.034 [2024-11-27 04:54:36.143694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.143711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.034 [2024-11-27 04:54:36.143717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:29.034 [2024-11-27 04:54:36.143724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.034 [2024-11-27 04:54:36.143731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.034 [2024-11-27 04:54:36.205850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.034 [2024-11-27 04:54:36.205880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:29.034 [2024-11-27 04:54:36.205891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:32:29.034 [2024-11-27 04:54:36.205899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.256810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.256844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:29.293 [2024-11-27 04:54:36.256858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.256865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.256940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.256948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:29.293 [2024-11-27 04:54:36.256957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.256963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.257025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:29.293 [2024-11-27 04:54:36.257033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.257039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.257147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:29.293 [2024-11-27 04:54:36.257155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.257161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.257199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:29.293 [2024-11-27 04:54:36.257207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.257212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.257257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:29.293 [2024-11-27 04:54:36.257265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.257271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.293 [2024-11-27 04:54:36.257330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:29.293 [2024-11-27 04:54:36.257339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.293 [2024-11-27 04:54:36.257344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.293 [2024-11-27 04:54:36.257467] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.185 ms, result 0 00:32:29.293 true 00:32:29.293 04:54:36 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77350 
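(Editor's note: the `killprocess 77350` call above expands, in the xtrace that follows, into a PID check, a guard against killing a `sudo` wrapper, and a kill/wait pair. A minimal sketch reconstructing that helper from the traced commands only — the actual autotest_common.sh source is not shown in this log, so names and control flow here are assumptions:)

```bash
# Hypothetical reconstruction of the killprocess helper; body inferred
# solely from the xtrace records below (@954, @958-@960, @964, @972-@978).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # trace: '[' -z 77350 ']'
    kill -0 "$pid" || return 0             # trace: kill -0 77350 (is it alive?)
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # trace compares against 'sudo' so a privilege wrapper is not
        # killed in place of the SPDK reactor process itself
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"                            # trace: kill 77350
    wait "$pid"                            # reap child, propagate exit status
}
```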
00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77350 ']' 00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77350 00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77350 00:32:29.293 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.294 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.294 killing process with pid 77350 00:32:29.294 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77350' 00:32:29.294 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77350 00:32:29.294 04:54:36 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77350 00:32:33.506 04:54:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:37.704 262144+0 records in 00:32:37.704 262144+0 records out 00:32:37.704 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.90081 s, 275 MB/s 00:32:37.704 04:54:44 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:39.081 04:54:46 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:39.081 [2024-11-27 04:54:46.134186] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:32:39.081 [2024-11-27 04:54:46.134271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77577 ] 00:32:39.342 [2024-11-27 04:54:46.288560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.342 [2024-11-27 04:54:46.405853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.604 [2024-11-27 04:54:46.732990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.604 [2024-11-27 04:54:46.733108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.867 [2024-11-27 04:54:46.897206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.897283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:39.867 [2024-11-27 04:54:46.897300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:39.867 [2024-11-27 04:54:46.897311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.897379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.897394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:39.867 [2024-11-27 04:54:46.897404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:39.867 [2024-11-27 04:54:46.897412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.897434] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:32:39.867 [2024-11-27 04:54:46.898157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:39.867 [2024-11-27 04:54:46.898188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.898198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:39.867 [2024-11-27 04:54:46.898209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:32:39.867 [2024-11-27 04:54:46.898218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.900421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:39.867 [2024-11-27 04:54:46.915563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.915615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:39.867 [2024-11-27 04:54:46.915630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.143 ms 00:32:39.867 [2024-11-27 04:54:46.915639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.915725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.915735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:39.867 [2024-11-27 04:54:46.915745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:39.867 [2024-11-27 04:54:46.915754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.927077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.927120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:39.867 [2024-11-27 04:54:46.927133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.230 ms 00:32:39.867 [2024-11-27 04:54:46.927149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.927235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.927245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:39.867 [2024-11-27 04:54:46.927256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:39.867 [2024-11-27 04:54:46.927265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.927326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.927338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:39.867 [2024-11-27 04:54:46.927347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:39.867 [2024-11-27 04:54:46.927356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.927384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:39.867 [2024-11-27 04:54:46.931920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.931964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:39.867 [2024-11-27 04:54:46.931979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:32:39.867 [2024-11-27 04:54:46.931988] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.932031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.867 [2024-11-27 04:54:46.932041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:39.867 [2024-11-27 04:54:46.932051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:39.867 [2024-11-27 04:54:46.932059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.867 [2024-11-27 04:54:46.932110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:39.867 [2024-11-27 04:54:46.932140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:39.867 [2024-11-27 04:54:46.932183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:39.867 [2024-11-27 04:54:46.932207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:39.867 [2024-11-27 04:54:46.932319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:39.867 [2024-11-27 04:54:46.932331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:39.867 [2024-11-27 04:54:46.932342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:39.867 [2024-11-27 04:54:46.932353] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:39.867 [2024-11-27 04:54:46.932364] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:39.868 [2024-11-27 04:54:46.932382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:39.868 [2024-11-27 04:54:46.932393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:39.868 [2024-11-27 04:54:46.932402] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:39.868 [2024-11-27 04:54:46.932411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.868 [2024-11-27 04:54:46.932421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:39.868 [2024-11-27 04:54:46.932430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:32:39.868 [2024-11-27 04:54:46.932438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.868 [2024-11-27 04:54:46.932522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.868 [2024-11-27 04:54:46.932530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:39.868 [2024-11-27 04:54:46.932539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:39.868 [2024-11-27 04:54:46.932546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.868 [2024-11-27 04:54:46.932658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:39.868 [2024-11-27 04:54:46.932670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:39.868 [2024-11-27 04:54:46.932679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:32:39.868 [2024-11-27 04:54:46.932689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:39.868 [2024-11-27 04:54:46.932706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:39.868 [2024-11-27 04:54:46.932732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.868 [2024-11-27 04:54:46.932748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:39.868 [2024-11-27 04:54:46.932754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:39.868 [2024-11-27 04:54:46.932766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.868 [2024-11-27 04:54:46.932781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:39.868 [2024-11-27 04:54:46.932792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:39.868 [2024-11-27 04:54:46.932800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:39.868 [2024-11-27 04:54:46.932816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:39.868 [2024-11-27 04:54:46.932838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:39.868 [2024-11-27 04:54:46.932861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:39.868 [2024-11-27 04:54:46.932881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:39.868 [2024-11-27 04:54:46.932901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.868 [2024-11-27 04:54:46.932914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:39.868 [2024-11-27 04:54:46.932921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.868 [2024-11-27 04:54:46.932934] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:32:39.868 [2024-11-27 04:54:46.932940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:39.868 [2024-11-27 04:54:46.932946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.868 [2024-11-27 04:54:46.932954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:39.868 [2024-11-27 04:54:46.932961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:39.868 [2024-11-27 04:54:46.932968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:39.868 [2024-11-27 04:54:46.932981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:39.868 [2024-11-27 04:54:46.932988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.932995] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:39.868 [2024-11-27 04:54:46.933006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:39.868 [2024-11-27 04:54:46.933015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:39.868 [2024-11-27 04:54:46.933023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.868 [2024-11-27 04:54:46.933032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:39.868 [2024-11-27 04:54:46.933041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:39.868 [2024-11-27 04:54:46.933049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:39.868 [2024-11-27 04:54:46.933057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:39.868 [2024-11-27 04:54:46.933079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:39.868 [2024-11-27 04:54:46.933105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:39.868 [2024-11-27 04:54:46.933115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:39.868 [2024-11-27 04:54:46.933126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:39.868 [2024-11-27 04:54:46.933148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:39.868 [2024-11-27 04:54:46.933156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:39.868 [2024-11-27 04:54:46.933165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:39.868 [2024-11-27 04:54:46.933174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:39.868 [2024-11-27 04:54:46.933181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:39.868 [2024-11-27 04:54:46.933190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:39.868 [2024-11-27 04:54:46.933198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:39.868 [2024-11-27 04:54:46.933206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:39.868 [2024-11-27 04:54:46.933214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:39.868 [2024-11-27 04:54:46.933253] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:39.868 [2024-11-27 04:54:46.933261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:39.868 [2024-11-27 04:54:46.933277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:39.868 [2024-11-27 04:54:46.933285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:39.868 [2024-11-27 04:54:46.933294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:39.868 [2024-11-27 04:54:46.933302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.868 [2024-11-27 04:54:46.933313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:39.868 [2024-11-27 04:54:46.933336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:32:39.868 [2024-11-27 04:54:46.933344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.868 [2024-11-27 04:54:46.971186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.868 [2024-11-27 04:54:46.971239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.868 [2024-11-27 04:54:46.971253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.791 ms 00:32:39.868 [2024-11-27 04:54:46.971267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.868 [2024-11-27 04:54:46.971367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.868 [2024-11-27 04:54:46.971377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:39.868 [2024-11-27 04:54:46.971387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.072 ms 00:32:39.868 [2024-11-27 04:54:46.971396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.021628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.021688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.869 [2024-11-27 04:54:47.021702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.167 ms 00:32:39.869 [2024-11-27 04:54:47.021712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.021766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.021777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.869 [2024-11-27 04:54:47.021792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:39.869 [2024-11-27 04:54:47.021800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.022568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.022612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.869 [2024-11-27 04:54:47.022623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:32:39.869 [2024-11-27 04:54:47.022632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.022806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.022818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.869 [2024-11-27 04:54:47.022834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:32:39.869 [2024-11-27 04:54:47.022844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.041294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.041354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.869 [2024-11-27 04:54:47.041366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.427 ms 00:32:39.869 [2024-11-27 04:54:47.041374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.869 [2024-11-27 04:54:47.056910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:39.869 [2024-11-27 04:54:47.056965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:39.869 [2024-11-27 04:54:47.056981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.869 [2024-11-27 04:54:47.056990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:39.869 [2024-11-27 04:54:47.057001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.487 ms 00:32:39.869 [2024-11-27 04:54:47.057009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.082997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.083055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:40.131 [2024-11-27 04:54:47.083076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.932 ms 00:32:40.131 [2024-11-27 04:54:47.083085] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.096320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.096369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:40.131 [2024-11-27 04:54:47.096380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.180 ms 00:32:40.131 [2024-11-27 04:54:47.096389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.109094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.109141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:40.131 [2024-11-27 04:54:47.109154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:32:40.131 [2024-11-27 04:54:47.109161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.109857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.109892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:40.131 [2024-11-27 04:54:47.109903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:32:40.131 [2024-11-27 04:54:47.109915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.182740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.182802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:40.131 [2024-11-27 04:54:47.182817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.804 ms 00:32:40.131 [2024-11-27 04:54:47.182833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.194711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:40.131 [2024-11-27 04:54:47.198878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.198924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:40.131 [2024-11-27 04:54:47.198936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms 00:32:40.131 [2024-11-27 04:54:47.198945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.199035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.199047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:40.131 [2024-11-27 04:54:47.199059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:40.131 [2024-11-27 04:54:47.199085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.199174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.199186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:40.131 [2024-11-27 04:54:47.199196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:40.131 [2024-11-27 04:54:47.199205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.199236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.199245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:32:40.131 [2024-11-27 04:54:47.199255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:40.131 [2024-11-27 04:54:47.199263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.199307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:40.131 [2024-11-27 04:54:47.199323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.199332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:40.131 [2024-11-27 04:54:47.199341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:40.131 [2024-11-27 04:54:47.199350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.225405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.225458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:40.131 [2024-11-27 04:54:47.225473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.033 ms 00:32:40.131 [2024-11-27 04:54:47.225488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.225579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.131 [2024-11-27 04:54:47.225592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:40.131 [2024-11-27 04:54:47.225603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:40.131 [2024-11-27 04:54:47.225612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.131 [2024-11-27 04:54:47.227918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.150 ms, result 0 00:32:41.073  [2024-11-27T04:54:49.659Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T04:54:50.601Z] Copying: 32/1024 [MB] (19 MBps) [2024-11-27T04:54:51.545Z] Copying: 54/1024 [MB] (22 MBps) [2024-11-27T04:54:52.491Z] Copying: 74/1024 [MB] (19 MBps) [2024-11-27T04:54:53.435Z] Copying: 97/1024 [MB] (23 MBps) [2024-11-27T04:54:54.378Z] Copying: 117/1024 [MB] (19 MBps) [2024-11-27T04:54:55.322Z] Copying: 135/1024 [MB] (18 MBps) [2024-11-27T04:54:56.265Z] Copying: 155/1024 [MB] (19 MBps) [2024-11-27T04:54:57.653Z] Copying: 174/1024 [MB] (18 MBps) [2024-11-27T04:54:58.598Z] Copying: 194/1024 [MB] (20 MBps) [2024-11-27T04:54:59.541Z] Copying: 211/1024 [MB] (16 MBps) [2024-11-27T04:55:00.486Z] Copying: 232/1024 [MB] (21 MBps) [2024-11-27T04:55:01.428Z] Copying: 244/1024 [MB] (12 MBps) [2024-11-27T04:55:02.371Z] Copying: 266/1024 [MB] (21 MBps) [2024-11-27T04:55:03.317Z] Copying: 283/1024 [MB] (17 MBps) [2024-11-27T04:55:04.261Z] Copying: 297/1024 [MB] (14 MBps) [2024-11-27T04:55:05.646Z] Copying: 313/1024 [MB] (15 MBps) [2024-11-27T04:55:06.591Z] Copying: 327/1024 [MB] (13 MBps) [2024-11-27T04:55:07.535Z] Copying: 342/1024 [MB] (15 MBps) [2024-11-27T04:55:08.479Z] Copying: 359/1024 [MB] (17 MBps) [2024-11-27T04:55:09.423Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-27T04:55:10.368Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-27T04:55:11.312Z] Copying: 395/1024 [MB] (12 MBps) [2024-11-27T04:55:12.257Z] Copying: 406/1024 [MB] (11 MBps) [2024-11-27T04:55:13.646Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-27T04:55:14.590Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-27T04:55:15.536Z] Copying: 440/1024 [MB] (11 MBps) 
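(The bracketed "Copying: N/1024 [MB] (X MBps)" records above and below are spdk_dd's periodic progress output for the 1024 MB transfer; the run ends with an overall figure of "average 14 MBps". A minimal host-side sketch of how such a run can be summarized follows — it assumes the console output has been split back into one record per line, and it deliberately ignores the occasional [kB]-denominated records; the function name and log filename are illustrative, not part of SPDK or this test.)

import re
from datetime import datetime

# Matches progress records like:
#   "[2024-11-27T04:55:16.480Z] Copying: 451/1024 [MB] (11 MBps)"
PROGRESS = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+Copying:\s+(?P<done>\d+)/(?P<total>\d+)\s+\[MB\]"
)

def average_rate(lines):
    """Return (copied_mb, elapsed_s, mb_per_s) from first/last progress records."""
    hits = [m for m in (PROGRESS.search(ln) for ln in lines) if m]
    if len(hits) < 2:
        return None
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    t0 = datetime.strptime(hits[0]["ts"], fmt)
    t1 = datetime.strptime(hits[-1]["ts"], fmt)
    copied = int(hits[-1]["done"]) - int(hits[0]["done"])
    elapsed = (t1 - t0).total_seconds()
    return copied, elapsed, (copied / elapsed if elapsed else float("inf"))

# Usage sketch: average_rate(open("console.log"))
# For the run above (~12 MB at 04:54:49.659Z to 1024 MB at 04:55:56.225Z)
# this gives roughly 15 MB/s, in line with the reported "average 14 MBps".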
[2024-11-27T04:55:16.480Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-27T04:55:17.423Z] Copying: 462/1024 [MB] (11 MBps) [2024-11-27T04:55:18.366Z] Copying: 473/1024 [MB] (11 MBps) [2024-11-27T04:55:19.309Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-27T04:55:20.347Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-27T04:55:21.298Z] Copying: 507/1024 [MB] (11 MBps) [2024-11-27T04:55:22.241Z] Copying: 521/1024 [MB] (13 MBps) [2024-11-27T04:55:23.631Z] Copying: 544/1024 [MB] (23 MBps) [2024-11-27T04:55:24.573Z] Copying: 558/1024 [MB] (13 MBps) [2024-11-27T04:55:25.514Z] Copying: 580/1024 [MB] (22 MBps) [2024-11-27T04:55:26.456Z] Copying: 593/1024 [MB] (13 MBps) [2024-11-27T04:55:27.400Z] Copying: 609/1024 [MB] (15 MBps) [2024-11-27T04:55:28.346Z] Copying: 626/1024 [MB] (17 MBps) [2024-11-27T04:55:29.291Z] Copying: 646/1024 [MB] (20 MBps) [2024-11-27T04:55:30.681Z] Copying: 663/1024 [MB] (16 MBps) [2024-11-27T04:55:31.255Z] Copying: 679/1024 [MB] (15 MBps) [2024-11-27T04:55:32.645Z] Copying: 696/1024 [MB] (16 MBps) [2024-11-27T04:55:33.591Z] Copying: 708/1024 [MB] (12 MBps) [2024-11-27T04:55:34.537Z] Copying: 719/1024 [MB] (10 MBps) [2024-11-27T04:55:35.481Z] Copying: 732/1024 [MB] (13 MBps) [2024-11-27T04:55:36.415Z] Copying: 747/1024 [MB] (14 MBps) [2024-11-27T04:55:37.360Z] Copying: 785/1024 [MB] (38 MBps) [2024-11-27T04:55:38.300Z] Copying: 802/1024 [MB] (16 MBps) [2024-11-27T04:55:39.242Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-27T04:55:40.630Z] Copying: 835/1024 [MB] (21 MBps) [2024-11-27T04:55:41.576Z] Copying: 850/1024 [MB] (15 MBps) [2024-11-27T04:55:42.522Z] Copying: 860/1024 [MB] (10 MBps) [2024-11-27T04:55:43.468Z] Copying: 870/1024 [MB] (10 MBps) [2024-11-27T04:55:44.410Z] Copying: 901792/1048576 [kB] (10224 kBps) [2024-11-27T04:55:45.355Z] Copying: 890/1024 [MB] (10 MBps) [2024-11-27T04:55:46.299Z] Copying: 902/1024 [MB] (11 MBps) [2024-11-27T04:55:47.244Z] Copying: 933896/1048576 [kB] (10132 kBps) [2024-11-27T04:55:48.650Z] Copying: 922/1024 [MB] (10 MBps) [2024-11-27T04:55:49.610Z] Copying: 933/1024 [MB] (10 MBps) [2024-11-27T04:55:50.555Z] Copying: 951/1024 [MB] (17 MBps) [2024-11-27T04:55:51.501Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-27T04:55:52.447Z] Copying: 974/1024 [MB] (10 MBps) [2024-11-27T04:55:53.392Z] Copying: 984/1024 [MB] (10 MBps) [2024-11-27T04:55:54.336Z] Copying: 995/1024 [MB] (10 MBps) [2024-11-27T04:55:55.282Z] Copying: 1006/1024 [MB] (10 MBps) [2024-11-27T04:55:56.225Z] Copying: 1016/1024 [MB] (10 MBps) [2024-11-27T04:55:56.225Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 04:55:55.897052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.897133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:49.022 [2024-11-27 04:55:55.897149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:49.022 [2024-11-27 04:55:55.897159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.897183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:49.022 [2024-11-27 04:55:55.900169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.900207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:49.022 [2024-11-27 04:55:55.900226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:33:49.022 [2024-11-27 04:55:55.900235] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.902702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.902749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:49.022 [2024-11-27 04:55:55.902761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:33:49.022 [2024-11-27 04:55:55.902769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.920585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.920632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:49.022 [2024-11-27 04:55:55.920644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.798 ms 00:33:49.022 [2024-11-27 04:55:55.920653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.926795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.926834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:49.022 [2024-11-27 04:55:55.926847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:33:49.022 [2024-11-27 04:55:55.926856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.953180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.953226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:49.022 [2024-11-27 04:55:55.953239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.261 ms 00:33:49.022 [2024-11-27 04:55:55.953246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.969172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.969216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:49.022 [2024-11-27 04:55:55.969228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.882 ms 00:33:49.022 [2024-11-27 04:55:55.969236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.969382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.022 [2024-11-27 04:55:55.969401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:49.022 [2024-11-27 04:55:55.969411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:33:49.022 [2024-11-27 04:55:55.969419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.022 [2024-11-27 04:55:55.994483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.023 [2024-11-27 04:55:55.994528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:49.023 [2024-11-27 04:55:55.994539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.049 ms 00:33:49.023 [2024-11-27 04:55:55.994546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.023 [2024-11-27 04:55:56.019512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.023 [2024-11-27 04:55:56.019552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:49.023 [2024-11-27 04:55:56.019563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.922 ms 
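(Each FTL management step in this shutdown trace, which continues below, is logged as a group of four *NOTICE* records from mngt/ftl_mngt.c:trace_step — Action or Rollback, then name, duration, and status. A small sketch of folding those groups into per-step timing tuples follows, useful for spotting which steps dominate a total such as the 315.240 ms 'FTL shutdown' reported further down. The record layout is inferred from this log, the code again assumes one record per line, and nothing here is an SPDK API.)

import re

STEP = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
    r"(?P<field>Action|Rollback|name: .+?|duration: [\d.]+ ms|status: -?\d+)\s*$"
)

def fold_steps(records):
    """Collapse Action/Rollback, name, duration, status quadruples."""
    steps, cur = [], {}
    for rec in records:
        m = STEP.search(rec)
        if not m:
            continue
        f = m["field"]
        if f in ("Action", "Rollback"):
            cur = {"kind": f}
        elif f.startswith("name: "):
            cur["name"] = f[6:]
        elif f.startswith("duration: "):
            cur["ms"] = float(f.split()[1])
        elif f.startswith("status: "):
            cur["status"] = int(f.split()[1])
            steps.append(cur)
    return steps
# e.g. [{'kind': 'Action', 'name': 'Persist L2P', 'ms': 17.798, 'status': 0}, ...]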
00:33:49.023 [2024-11-27 04:55:56.019570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.023 [2024-11-27 04:55:56.043255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.023 [2024-11-27 04:55:56.043297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:49.023 [2024-11-27 04:55:56.043306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.643 ms 00:33:49.023 [2024-11-27 04:55:56.043312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.023 [2024-11-27 04:55:56.061677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.023 [2024-11-27 04:55:56.061713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:49.023 [2024-11-27 04:55:56.061722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.304 ms 00:33:49.023 [2024-11-27 04:55:56.061728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.023 [2024-11-27 04:55:56.061763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:49.023 [2024-11-27 04:55:56.061776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:33:49.023 [2024-11-27 04:55:56.061886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.061999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:49.023 [2024-11-27 04:55:56.062246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062336] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:49.024 [2024-11-27 04:55:56.062400] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:49.024 [2024-11-27 04:55:56.062410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1070351d-6946-4b84-87eb-caed8417ea7c 00:33:49.024 [2024-11-27 04:55:56.062416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:49.024 [2024-11-27 04:55:56.062422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:49.024 [2024-11-27 04:55:56.062428] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:49.024 [2024-11-27 04:55:56.062435] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:49.024 [2024-11-27 04:55:56.062440] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:49.024 [2024-11-27 04:55:56.062452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:49.024 [2024-11-27 04:55:56.062458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:49.024 [2024-11-27 04:55:56.062463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:49.024 [2024-11-27 04:55:56.062467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:49.024 [2024-11-27 04:55:56.062473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.024 [2024-11-27 04:55:56.062479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:49.024 [2024-11-27 04:55:56.062485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:33:49.024 [2024-11-27 04:55:56.062491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.073033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.024 [2024-11-27 04:55:56.073089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:49.024 [2024-11-27 04:55:56.073098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:33:49.024 [2024-11-27 04:55:56.073105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.073414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.024 [2024-11-27 04:55:56.073429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:49.024 [2024-11-27 04:55:56.073436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:33:49.024 [2024-11-27 04:55:56.073447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.101026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.101060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:49.024 [2024-11-27 04:55:56.101076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.101083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.101132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.101139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:49.024 [2024-11-27 04:55:56.101145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.101155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.101197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.101205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:49.024 [2024-11-27 04:55:56.101211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.101218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.101229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.101236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:49.024 [2024-11-27 04:55:56.101242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.101248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.162873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.162909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:49.024 [2024-11-27 04:55:56.162919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.162925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.211799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.211830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:49.024 [2024-11-27 04:55:56.211838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.211848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.211913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.211921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:49.024 [2024-11-27 04:55:56.211927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.211933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.211958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:33:49.024 [2024-11-27 04:55:56.211965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:49.024 [2024-11-27 04:55:56.211971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.211977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.212046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.212054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:49.024 [2024-11-27 04:55:56.212060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.212077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.212101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.212108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:49.024 [2024-11-27 04:55:56.212114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.212120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.212146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.212155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:49.024 [2024-11-27 04:55:56.212161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.212167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.212198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:49.024 [2024-11-27 04:55:56.212205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:49.024 [2024-11-27 04:55:56.212212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:49.024 [2024-11-27 04:55:56.212217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.024 [2024-11-27 04:55:56.212307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 315.240 ms, result 0 00:33:49.594 00:33:49.594 00:33:49.594 04:55:56 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:49.853 [2024-11-27 04:55:56.831455] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:33:49.853 [2024-11-27 04:55:56.831573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78307 ] 00:33:49.853 [2024-11-27 04:55:56.986816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.111 [2024-11-27 04:55:57.062760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.111 [2024-11-27 04:55:57.270025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.111 [2024-11-27 04:55:57.270090] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.370 [2024-11-27 04:55:57.416878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.416918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:50.370 [2024-11-27 04:55:57.416928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.370 [2024-11-27 04:55:57.416934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.416967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.416976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:50.370 [2024-11-27 04:55:57.416982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:50.370 [2024-11-27 04:55:57.416988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.417000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:50.370 [2024-11-27 04:55:57.417518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:50.370 [2024-11-27 04:55:57.417536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.417542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:50.370 [2024-11-27 04:55:57.417549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:33:50.370 [2024-11-27 04:55:57.417555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.418489] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:50.370 [2024-11-27 04:55:57.427895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.427924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:50.370 [2024-11-27 04:55:57.427932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.408 ms 00:33:50.370 [2024-11-27 04:55:57.427938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.427980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.427988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:50.370 [2024-11-27 04:55:57.427994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:50.370 [2024-11-27 04:55:57.428000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.432284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:50.370 [2024-11-27 04:55:57.432309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:50.370 [2024-11-27 04:55:57.432316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:33:50.370 [2024-11-27 04:55:57.432325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.432378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.432384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:50.370 [2024-11-27 04:55:57.432391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:50.370 [2024-11-27 04:55:57.432396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.432432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.432439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:50.370 [2024-11-27 04:55:57.432445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.370 [2024-11-27 04:55:57.432451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.432466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:50.370 [2024-11-27 04:55:57.435099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.435123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:50.370 [2024-11-27 04:55:57.435133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:33:50.370 [2024-11-27 04:55:57.435138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.435162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.435168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:50.370 [2024-11-27 04:55:57.435175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:50.370 [2024-11-27 04:55:57.435180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.435194] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:50.370 [2024-11-27 04:55:57.435208] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:50.370 [2024-11-27 04:55:57.435235] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:50.370 [2024-11-27 04:55:57.435247] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:50.370 [2024-11-27 04:55:57.435325] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:50.370 [2024-11-27 04:55:57.435333] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:50.370 [2024-11-27 04:55:57.435341] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:50.370 [2024-11-27 04:55:57.435348] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:50.370 [2024-11-27 04:55:57.435355] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:50.370 [2024-11-27 04:55:57.435361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:50.370 [2024-11-27 04:55:57.435367] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:50.370 [2024-11-27 04:55:57.435374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:50.370 [2024-11-27 04:55:57.435380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:50.370 [2024-11-27 04:55:57.435385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.435391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:50.370 [2024-11-27 04:55:57.435397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:33:50.370 [2024-11-27 04:55:57.435402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.435465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.370 [2024-11-27 04:55:57.435471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:50.370 [2024-11-27 04:55:57.435477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:50.370 [2024-11-27 04:55:57.435482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.370 [2024-11-27 04:55:57.435558] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:50.370 [2024-11-27 04:55:57.435572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:50.370 [2024-11-27 04:55:57.435578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.370 [2024-11-27 04:55:57.435584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.370 [2024-11-27 04:55:57.435590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:50.370 [2024-11-27 04:55:57.435596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:50.370 [2024-11-27 04:55:57.435601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:50.370 [2024-11-27 04:55:57.435606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:50.371 [2024-11-27 04:55:57.435611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.371 [2024-11-27 04:55:57.435622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:50.371 [2024-11-27 04:55:57.435626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:50.371 [2024-11-27 04:55:57.435631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.371 [2024-11-27 04:55:57.435641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:50.371 [2024-11-27 04:55:57.435646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:50.371 [2024-11-27 04:55:57.435652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:50.371 [2024-11-27 04:55:57.435663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435667] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:50.371 [2024-11-27 04:55:57.435677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:50.371 [2024-11-27 04:55:57.435692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:50.371 [2024-11-27 04:55:57.435707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:50.371 [2024-11-27 04:55:57.435721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:50.371 [2024-11-27 04:55:57.435736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.371 [2024-11-27 04:55:57.435746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:50.371 [2024-11-27 04:55:57.435751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:50.371 [2024-11-27 04:55:57.435755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.371 [2024-11-27 04:55:57.435760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:50.371 [2024-11-27 04:55:57.435765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:50.371 [2024-11-27 04:55:57.435770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:50.371 [2024-11-27 04:55:57.435779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:50.371 [2024-11-27 04:55:57.435784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:50.371 [2024-11-27 04:55:57.435794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:50.371 [2024-11-27 04:55:57.435800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.371 [2024-11-27 04:55:57.435811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:50.371 [2024-11-27 04:55:57.435816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:50.371 [2024-11-27 04:55:57.435822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:50.371 
[2024-11-27 04:55:57.435827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:50.371 [2024-11-27 04:55:57.435832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:50.371 [2024-11-27 04:55:57.435837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:50.371 [2024-11-27 04:55:57.435843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:50.371 [2024-11-27 04:55:57.435850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:50.371 [2024-11-27 04:55:57.435863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:50.371 [2024-11-27 04:55:57.435869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:50.371 [2024-11-27 04:55:57.435874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:50.371 [2024-11-27 04:55:57.435879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:50.371 [2024-11-27 04:55:57.435884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:50.371 [2024-11-27 04:55:57.435889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:50.371 [2024-11-27 04:55:57.435894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:50.371 [2024-11-27 04:55:57.435899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:50.371 [2024-11-27 04:55:57.435905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:50.371 [2024-11-27 04:55:57.435931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:50.371 [2024-11-27 04:55:57.435937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:50.371 [2024-11-27 04:55:57.435948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:50.371 [2024-11-27 04:55:57.435953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:50.371 [2024-11-27 04:55:57.435958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:50.371 [2024-11-27 04:55:57.435963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.435969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:50.371 [2024-11-27 04:55:57.435974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:33:50.371 [2024-11-27 04:55:57.435979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.456481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.456509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:50.371 [2024-11-27 04:55:57.456517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.470 ms 00:33:50.371 [2024-11-27 04:55:57.456525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.456584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.456590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:50.371 [2024-11-27 04:55:57.456596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:50.371 [2024-11-27 04:55:57.456602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.495263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.495295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.371 [2024-11-27 04:55:57.495304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.623 ms 00:33:50.371 [2024-11-27 04:55:57.495311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.495334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.495341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.371 [2024-11-27 04:55:57.495349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:50.371 [2024-11-27 04:55:57.495355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.495662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.495685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.371 [2024-11-27 04:55:57.495693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:33:50.371 [2024-11-27 04:55:57.495699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.371 [2024-11-27 04:55:57.495796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.371 [2024-11-27 04:55:57.495803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.372 [2024-11-27 04:55:57.495809] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:33:50.372 [2024-11-27 04:55:57.495819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.506166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.506190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.372 [2024-11-27 04:55:57.506200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.332 ms 00:33:50.372 [2024-11-27 04:55:57.506205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.515826] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:50.372 [2024-11-27 04:55:57.515853] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:50.372 [2024-11-27 04:55:57.515862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.515868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:50.372 [2024-11-27 04:55:57.515875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.589 ms 00:33:50.372 [2024-11-27 04:55:57.515881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.534047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.534080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:50.372 [2024-11-27 04:55:57.534088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.136 ms 00:33:50.372 [2024-11-27 04:55:57.534094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.542915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.542941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:50.372 [2024-11-27 04:55:57.542948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:33:50.372 [2024-11-27 04:55:57.542953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.551493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.551520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:50.372 [2024-11-27 04:55:57.551527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:33:50.372 [2024-11-27 04:55:57.551533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.372 [2024-11-27 04:55:57.551978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.372 [2024-11-27 04:55:57.551999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.372 [2024-11-27 04:55:57.552008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:33:50.372 [2024-11-27 04:55:57.552013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.594773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.594811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:50.630 [2024-11-27 04:55:57.594824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.747 ms 00:33:50.630 [2024-11-27 04:55:57.594831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.602554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.630 [2024-11-27 04:55:57.604216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.604239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.630 [2024-11-27 04:55:57.604246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.352 ms 00:33:50.630 [2024-11-27 04:55:57.604252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.604304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.604312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:50.630 [2024-11-27 04:55:57.604321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:50.630 [2024-11-27 04:55:57.604327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.604367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.604374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.630 [2024-11-27 04:55:57.604380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:50.630 [2024-11-27 04:55:57.604385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.604399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.604406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:50.630 [2024-11-27 04:55:57.604412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.630 [2024-11-27 04:55:57.604418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.604442] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:50.630 [2024-11-27 04:55:57.604450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.604455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:50.630 [2024-11-27 04:55:57.604461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:50.630 [2024-11-27 04:55:57.604467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.621989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.622015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.630 [2024-11-27 04:55:57.622026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.509 ms 00:33:50.630 [2024-11-27 04:55:57.622033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 [2024-11-27 04:55:57.622094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.630 [2024-11-27 04:55:57.622102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.630 [2024-11-27 04:55:57.622108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:50.630 [2024-11-27 04:55:57.622114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.630 
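The startup trace above ends with finish_msg reporting 'FTL startup', duration = 205.617 ms, result 0, and every preceding step appears as a trace_step triple (name, duration, status) from mngt/ftl_mngt.c. A minimal sketch for aggregating those per-step durations from a saved console log follows; it assumes the console is saved with one record per line (not wrapped as captured here), and the path build.log is hypothetical.

import re

# Hedged sketch: collect (name, duration) pairs from the mngt/ftl_mngt.c
# trace_step records dumped above and compare their sum against the total
# printed by finish_msg (205.617 ms for 'FTL startup').
name_pat = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)")
dur_pat = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

steps, pending = [], None
with open("build.log") as log:  # hypothetical saved-console path
    for line in log:
        if (m := name_pat.search(line)):
            pending = m.group(1).strip()
        elif pending and (m := dur_pat.search(line)):
            steps.append((pending, float(m.group(1))))
            pending = None

total = sum(d for _, d in steps)
print(f"{len(steps)} steps, {total:.3f} ms summed")  # cf. finish_msg's total
for name, d in sorted(steps, key=lambda s: -s[1])[:5]:
    print(f"{d:9.3f} ms  {name}")  # the slowest steps dominate startup time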
[2024-11-27 04:55:57.622869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 205.617 ms, result 0 00:33:51.573  [2024-11-27T04:56:00.170Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-27T04:56:01.115Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-27T04:56:02.058Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-27T04:56:03.003Z] Copying: 55/1024 [MB] (12 MBps) [2024-11-27T04:56:03.948Z] Copying: 69/1024 [MB] (14 MBps) [2024-11-27T04:56:04.894Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-27T04:56:05.837Z] Copying: 90/1024 [MB] (10 MBps) [2024-11-27T04:56:06.779Z] Copying: 101/1024 [MB] (10 MBps) [2024-11-27T04:56:08.166Z] Copying: 116/1024 [MB] (15 MBps) [2024-11-27T04:56:09.109Z] Copying: 129/1024 [MB] (13 MBps) [2024-11-27T04:56:10.053Z] Copying: 142/1024 [MB] (12 MBps) [2024-11-27T04:56:10.999Z] Copying: 153/1024 [MB] (11 MBps) [2024-11-27T04:56:11.944Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-27T04:56:12.888Z] Copying: 191/1024 [MB] (18 MBps) [2024-11-27T04:56:13.831Z] Copying: 205/1024 [MB] (14 MBps) [2024-11-27T04:56:14.773Z] Copying: 225/1024 [MB] (19 MBps) [2024-11-27T04:56:16.163Z] Copying: 246/1024 [MB] (21 MBps) [2024-11-27T04:56:17.108Z] Copying: 264/1024 [MB] (17 MBps) [2024-11-27T04:56:18.052Z] Copying: 284/1024 [MB] (20 MBps) [2024-11-27T04:56:18.993Z] Copying: 300/1024 [MB] (15 MBps) [2024-11-27T04:56:19.938Z] Copying: 315/1024 [MB] (15 MBps) [2024-11-27T04:56:20.905Z] Copying: 335/1024 [MB] (19 MBps) [2024-11-27T04:56:21.921Z] Copying: 351/1024 [MB] (16 MBps) [2024-11-27T04:56:22.868Z] Copying: 387/1024 [MB] (36 MBps) [2024-11-27T04:56:23.810Z] Copying: 405/1024 [MB] (18 MBps) [2024-11-27T04:56:25.194Z] Copying: 418/1024 [MB] (12 MBps) [2024-11-27T04:56:25.796Z] Copying: 439/1024 [MB] (21 MBps) [2024-11-27T04:56:27.186Z] Copying: 459/1024 [MB] (19 MBps) [2024-11-27T04:56:28.131Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-27T04:56:29.074Z] Copying: 480/1024 [MB] (11 MBps) [2024-11-27T04:56:30.012Z] Copying: 492/1024 [MB] (11 MBps) [2024-11-27T04:56:30.957Z] Copying: 516/1024 [MB] (24 MBps) [2024-11-27T04:56:31.903Z] Copying: 537/1024 [MB] (21 MBps) [2024-11-27T04:56:32.846Z] Copying: 558/1024 [MB] (20 MBps) [2024-11-27T04:56:33.789Z] Copying: 575/1024 [MB] (16 MBps) [2024-11-27T04:56:35.175Z] Copying: 593/1024 [MB] (17 MBps) [2024-11-27T04:56:36.119Z] Copying: 615/1024 [MB] (22 MBps) [2024-11-27T04:56:37.065Z] Copying: 632/1024 [MB] (17 MBps) [2024-11-27T04:56:38.010Z] Copying: 651/1024 [MB] (18 MBps) [2024-11-27T04:56:38.953Z] Copying: 671/1024 [MB] (19 MBps) [2024-11-27T04:56:39.895Z] Copying: 688/1024 [MB] (16 MBps) [2024-11-27T04:56:40.838Z] Copying: 703/1024 [MB] (15 MBps) [2024-11-27T04:56:41.783Z] Copying: 720/1024 [MB] (16 MBps) [2024-11-27T04:56:43.170Z] Copying: 735/1024 [MB] (15 MBps) [2024-11-27T04:56:44.110Z] Copying: 754/1024 [MB] (19 MBps) [2024-11-27T04:56:45.054Z] Copying: 780/1024 [MB] (26 MBps) [2024-11-27T04:56:45.999Z] Copying: 793/1024 [MB] (12 MBps) [2024-11-27T04:56:46.946Z] Copying: 808/1024 [MB] (14 MBps) [2024-11-27T04:56:47.890Z] Copying: 820/1024 [MB] (12 MBps) [2024-11-27T04:56:48.836Z] Copying: 833/1024 [MB] (12 MBps) [2024-11-27T04:56:49.779Z] Copying: 854/1024 [MB] (20 MBps) [2024-11-27T04:56:51.166Z] Copying: 874/1024 [MB] (20 MBps) [2024-11-27T04:56:52.114Z] Copying: 889/1024 [MB] (14 MBps) [2024-11-27T04:56:53.113Z] Copying: 900/1024 [MB] (10 MBps) [2024-11-27T04:56:54.057Z] Copying: 913/1024 [MB] (13 MBps) [2024-11-27T04:56:54.997Z] Copying: 928/1024 [MB] (14 MBps) 
[2024-11-27T04:56:55.939Z] Copying: 948/1024 [MB] (19 MBps) [2024-11-27T04:56:56.883Z] Copying: 959/1024 [MB] (11 MBps) [2024-11-27T04:56:57.828Z] Copying: 970/1024 [MB] (11 MBps) [2024-11-27T04:56:58.772Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-27T04:57:00.155Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-27T04:57:00.417Z] Copying: 1017/1024 [MB] (19 MBps) [2024-11-27T04:57:00.678Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-27 04:57:00.652179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.475 [2024-11-27 04:57:00.652247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:53.475 [2024-11-27 04:57:00.652264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:53.475 [2024-11-27 04:57:00.652273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.475 [2024-11-27 04:57:00.652299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:53.475 [2024-11-27 04:57:00.655808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.475 [2024-11-27 04:57:00.655863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:53.475 [2024-11-27 04:57:00.655876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.491 ms 00:34:53.475 [2024-11-27 04:57:00.655884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.475 [2024-11-27 04:57:00.656251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.475 [2024-11-27 04:57:00.656265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:53.475 [2024-11-27 04:57:00.656274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:34:53.475 [2024-11-27 04:57:00.656282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.475 [2024-11-27 04:57:00.659764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.475 [2024-11-27 04:57:00.659789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:53.475 [2024-11-27 04:57:00.659801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.467 ms 00:34:53.475 [2024-11-27 04:57:00.659814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.475 [2024-11-27 04:57:00.667552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.475 [2024-11-27 04:57:00.667599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:53.475 [2024-11-27 04:57:00.667611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.718 ms 00:34:53.475 [2024-11-27 04:57:00.667620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.696331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.696387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:53.737 [2024-11-27 04:57:00.696403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.628 ms 00:34:53.737 [2024-11-27 04:57:00.696411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.714736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.714787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:53.737 [2024-11-27 04:57:00.714801] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.269 ms 00:34:53.737 [2024-11-27 04:57:00.714810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.714969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.714980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:53.737 [2024-11-27 04:57:00.714990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:53.737 [2024-11-27 04:57:00.714999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.741116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.741209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:53.737 [2024-11-27 04:57:00.741244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.101 ms 00:34:53.737 [2024-11-27 04:57:00.741265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.765289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.765430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:53.737 [2024-11-27 04:57:00.765446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.768 ms 00:34:53.737 [2024-11-27 04:57:00.765454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.788317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.788349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:53.737 [2024-11-27 04:57:00.788359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.833 ms 00:34:53.737 [2024-11-27 04:57:00.788366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.811268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.737 [2024-11-27 04:57:00.811299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:53.737 [2024-11-27 04:57:00.811309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.848 ms 00:34:53.737 [2024-11-27 04:57:00.811316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.737 [2024-11-27 04:57:00.811348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:53.737 [2024-11-27 04:57:00.811366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:34:53.737 [2024-11-27 04:57:00.811425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:53.737 [2024-11-27 04:57:00.811650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811964] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.811993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:53.738 [2024-11-27 04:57:00.812129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:53.738 [2024-11-27 04:57:00.812137] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1070351d-6946-4b84-87eb-caed8417ea7c 00:34:53.738 [2024-11-27 04:57:00.812145] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:53.738 [2024-11-27 04:57:00.812152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:53.738 [2024-11-27 04:57:00.812159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:53.738 [2024-11-27 04:57:00.812167] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:53.738 [2024-11-27 04:57:00.812180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:53.738 [2024-11-27 04:57:00.812187] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:53.738 [2024-11-27 04:57:00.812194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:53.738 [2024-11-27 04:57:00.812201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:53.738 [2024-11-27 04:57:00.812207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:53.738 [2024-11-27 04:57:00.812214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.738 [2024-11-27 04:57:00.812222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:53.738 [2024-11-27 04:57:00.812231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:34:53.738 [2024-11-27 04:57:00.812240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.738 [2024-11-27 04:57:00.824597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.738 [2024-11-27 04:57:00.824627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:53.738 [2024-11-27 04:57:00.824637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.341 ms 00:34:53.738 [2024-11-27 04:57:00.824644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.738 [2024-11-27 04:57:00.824999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:53.738 [2024-11-27 04:57:00.825012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:53.738 [2024-11-27 04:57:00.825026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:34:53.738 [2024-11-27 04:57:00.825033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.738 [2024-11-27 04:57:00.858661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.738 [2024-11-27 04:57:00.858800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:53.738 [2024-11-27 04:57:00.858818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.738 [2024-11-27 04:57:00.858827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.738 [2024-11-27 04:57:00.858891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.738 [2024-11-27 04:57:00.858900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:53.738 [2024-11-27 04:57:00.858914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.738 [2024-11-27 04:57:00.858923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.738 [2024-11-27 04:57:00.858996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.738 [2024-11-27 04:57:00.859008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:53.738 [2024-11-27 04:57:00.859017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.738 [2024-11-27 04:57:00.859025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.739 [2024-11-27 04:57:00.859042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.739 [2024-11-27 04:57:00.859051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:53.739 [2024-11-27 04:57:00.859060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.739 [2024-11-27 04:57:00.859089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
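The 'Set FTL clean state' step above triggers ftl_dev_dump_bands, which emits one "Band N: 0 / 261120 wr_cnt: 0 state: free" record per band for all 100 bands, followed by the ftl_dev_dump_stats block (total valid LBAs: 0, total writes: 960, WAF: inf). A small sketch that folds the band records into a per-state histogram, rather than eyeballing 100 near-identical lines; build.log is again a hypothetical saved-console path.

import re
from collections import Counter

# Hedged sketch: summarize the ftl_debug.c band dump above. Record format:
#   Band <n>: <valid> / <total> wr_cnt: <w> state: <state>
band_pat = re.compile(r"Band +(\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

states, valid, wr = Counter(), 0, 0
with open("build.log") as log:  # hypothetical path
    for line in log:
        if (m := band_pat.search(line)):
            states[m.group(5)] += 1
            valid += int(m.group(2))
            wr += int(m.group(4))

print(dict(states), f"valid LBAs: {valid}", f"wr_cnt sum: {wr}")
# For the dump above this prints {'free': 100} valid LBAs: 0 wr_cnt sum: 0,
# consistent with ftl_debug.c's "total valid LBAs: 0" stats line.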
00:34:53.999 [2024-11-27 04:57:00.940168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:00.940225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:53.999 [2024-11-27 04:57:00.940239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:00.940248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.009352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.009585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:53.999 [2024-11-27 04:57:01.009614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:01.009624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.009686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.009696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:53.999 [2024-11-27 04:57:01.009705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:01.009714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.009770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.009781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:53.999 [2024-11-27 04:57:01.009789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:01.009798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.009904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.009914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:53.999 [2024-11-27 04:57:01.009923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:01.009931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.009963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.009973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:53.999 [2024-11-27 04:57:01.009982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.999 [2024-11-27 04:57:01.009990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.999 [2024-11-27 04:57:01.010038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.999 [2024-11-27 04:57:01.010048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:54.000 [2024-11-27 04:57:01.010057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:54.000 [2024-11-27 04:57:01.010100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.000 [2024-11-27 04:57:01.010150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:54.000 [2024-11-27 04:57:01.010161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:54.000 [2024-11-27 04:57:01.010170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:54.000 [2024-11-27 04:57:01.010178] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.000 [2024-11-27 04:57:01.010320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.105 ms, result 0 00:34:54.572 00:34:54.572 00:34:54.572 04:57:01 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:57.117 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:57.117 04:57:03 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:57.117 [2024-11-27 04:57:03.851221] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:34:57.117 [2024-11-27 04:57:03.851312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78987 ] 00:34:57.117 [2024-11-27 04:57:04.004781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.117 [2024-11-27 04:57:04.116579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.378 [2024-11-27 04:57:04.414681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:57.378 [2024-11-27 04:57:04.414771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:57.378 [2024-11-27 04:57:04.575937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.378 [2024-11-27 04:57:04.576007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:57.378 [2024-11-27 04:57:04.576029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:57.378 [2024-11-27 04:57:04.576042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.378 [2024-11-27 04:57:04.576153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.378 [2024-11-27 04:57:04.576177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:57.378 [2024-11-27 04:57:04.576190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:57.378 [2024-11-27 04:57:04.576199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.378 [2024-11-27 04:57:04.576225] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:57.378 [2024-11-27 04:57:04.576990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:57.378 [2024-11-27 04:57:04.577025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.378 [2024-11-27 04:57:04.577033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:57.378 [2024-11-27 04:57:04.577044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:34:57.378 [2024-11-27 04:57:04.577052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.578985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:57.640 [2024-11-27 04:57:04.593192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.593243] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Load super block 00:34:57.640 [2024-11-27 04:57:04.593257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.211 ms 00:34:57.640 [2024-11-27 04:57:04.593266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.593367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.593380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:57.640 [2024-11-27 04:57:04.593390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:57.640 [2024-11-27 04:57:04.593398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.601597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.601638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:57.640 [2024-11-27 04:57:04.601649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.113 ms 00:34:57.640 [2024-11-27 04:57:04.601664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.601745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.601755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:57.640 [2024-11-27 04:57:04.601764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:34:57.640 [2024-11-27 04:57:04.601772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.601818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.601829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:57.640 [2024-11-27 04:57:04.601838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:57.640 [2024-11-27 04:57:04.601846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.601874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:57.640 [2024-11-27 04:57:04.605923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.605960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:57.640 [2024-11-27 04:57:04.605975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:34:57.640 [2024-11-27 04:57:04.605983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.606018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.606027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:57.640 [2024-11-27 04:57:04.606036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:57.640 [2024-11-27 04:57:04.606044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.606118] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:57.640 [2024-11-27 04:57:04.606142] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:57.640 [2024-11-27 04:57:04.606180] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 
bytes 00:34:57.640 [2024-11-27 04:57:04.606199] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:57.640 [2024-11-27 04:57:04.606306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:57.640 [2024-11-27 04:57:04.606317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:57.640 [2024-11-27 04:57:04.606329] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:57.640 [2024-11-27 04:57:04.606340] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606359] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:57.640 [2024-11-27 04:57:04.606367] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:57.640 [2024-11-27 04:57:04.606378] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:57.640 [2024-11-27 04:57:04.606386] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:57.640 [2024-11-27 04:57:04.606394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.606401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:57.640 [2024-11-27 04:57:04.606409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:34:57.640 [2024-11-27 04:57:04.606417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.606504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.640 [2024-11-27 04:57:04.606513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:57.640 [2024-11-27 04:57:04.606521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:57.640 [2024-11-27 04:57:04.606529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.640 [2024-11-27 04:57:04.606634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:57.640 [2024-11-27 04:57:04.606645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:57.640 [2024-11-27 04:57:04.606653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:57.640 [2024-11-27 04:57:04.606676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:57.640 [2024-11-27 04:57:04.606699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:57.640 [2024-11-27 04:57:04.606714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:57.640 [2024-11-27 04:57:04.606721] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:57.640 [2024-11-27 04:57:04.606727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:57.640 [2024-11-27 04:57:04.606741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:57.640 [2024-11-27 04:57:04.606751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:57.640 [2024-11-27 04:57:04.606758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:57.640 [2024-11-27 04:57:04.606773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:57.640 [2024-11-27 04:57:04.606794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:57.640 [2024-11-27 04:57:04.606816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:57.640 [2024-11-27 04:57:04.606837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:57.640 [2024-11-27 04:57:04.606858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:57.640 [2024-11-27 04:57:04.606871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:57.640 [2024-11-27 04:57:04.606878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:57.640 [2024-11-27 04:57:04.606884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:57.640 [2024-11-27 04:57:04.606891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:57.640 [2024-11-27 04:57:04.606898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:57.640 [2024-11-27 04:57:04.606904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:57.641 [2024-11-27 04:57:04.606910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:57.641 [2024-11-27 04:57:04.606917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:57.641 [2024-11-27 04:57:04.606923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:57.641 [2024-11-27 04:57:04.606930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:57.641 [2024-11-27 04:57:04.606937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:57.641 [2024-11-27 04:57:04.606945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:34:57.641 [2024-11-27 04:57:04.606952] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:57.641 [2024-11-27 04:57:04.606960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:57.641 [2024-11-27 04:57:04.606967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:57.641 [2024-11-27 04:57:04.606976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:57.641 [2024-11-27 04:57:04.606984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:57.641 [2024-11-27 04:57:04.606991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:57.641 [2024-11-27 04:57:04.606999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:57.641 [2024-11-27 04:57:04.607006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:57.641 [2024-11-27 04:57:04.607012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:57.641 [2024-11-27 04:57:04.607019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:57.641 [2024-11-27 04:57:04.607027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:57.641 [2024-11-27 04:57:04.607037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:57.641 [2024-11-27 04:57:04.607055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:57.641 [2024-11-27 04:57:04.607076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:57.641 [2024-11-27 04:57:04.607084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:57.641 [2024-11-27 04:57:04.607091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:57.641 [2024-11-27 04:57:04.607098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:57.641 [2024-11-27 04:57:04.607106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:57.641 [2024-11-27 04:57:04.607112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:57.641 [2024-11-27 04:57:04.607119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:57.641 [2024-11-27 04:57:04.607126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:57.641 
[2024-11-27 04:57:04.607146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:57.641 [2024-11-27 04:57:04.607160] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:57.641 [2024-11-27 04:57:04.607169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:57.641 [2024-11-27 04:57:04.607187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:57.641 [2024-11-27 04:57:04.607195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:57.641 [2024-11-27 04:57:04.607204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:57.641 [2024-11-27 04:57:04.607212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.607220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:57.641 [2024-11-27 04:57:04.607228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:34:57.641 [2024-11-27 04:57:04.607238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.639370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.639549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:57.641 [2024-11-27 04:57:04.639625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.088 ms 00:34:57.641 [2024-11-27 04:57:04.639650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.639756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.639778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:57.641 [2024-11-27 04:57:04.639799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:57.641 [2024-11-27 04:57:04.639823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.687539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.687742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:57.641 [2024-11-27 04:57:04.687933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.646 ms 00:34:57.641 [2024-11-27 04:57:04.687976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.688041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.688105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:57.641 [2024-11-27 04:57:04.688128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:57.641 
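The superblock v5 dump above lists every nvc metadata region as type/ver/blk_offs/blk_sz, with the trailing type:0xfffffffe entry marking the free area. The regions should tile the cache device with no gaps, which the values can be checked against directly; the sketch below copies the nvc rows from the dump, and the 4 KiB FTL block size is an assumption that is consistent with the reported 5171.00 MiB NV cache capacity.

# Hedged sketch: verify the nvc SB metadata regions dumped above are
# contiguous (each blk_offs + blk_sz equals the next blk_offs) and that the
# final free region ends at the device size. 5171.00 MiB at an assumed 4 KiB
# block size is 5171 * 256 = 0x143300 blocks, matching 0x7220 + 0x13c0e0.
regions = [  # (type, blk_offs, blk_sz) copied from the dump above
    (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
    (0x4, 0x50A0, 0x80), (0xA, 0x5120, 0x800), (0xB, 0x5920, 0x800),
    (0xC, 0x6120, 0x800), (0xD, 0x6920, 0x800), (0xE, 0x7120, 0x40),
    (0xF, 0x7160, 0x40), (0x10, 0x71A0, 0x20), (0x11, 0x71C0, 0x20),
    (0x6, 0x71E0, 0x20), (0x7, 0x7200, 0x20), (0xFFFFFFFE, 0x7220, 0x13C0E0),
]
end = 0
for rtype, offs, size in regions:
    assert offs == end, f"gap before region type 0x{rtype:x}"
    end = offs + size
assert end == 5171 * (1024 * 1024 // 4096)  # 5171.00 MiB NV cache device
print(f"layout is contiguous, ends at 0x{end:x} blocks")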
[2024-11-27 04:57:04.688148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.688755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.688909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:57.641 [2024-11-27 04:57:04.688970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:34:57.641 [2024-11-27 04:57:04.688993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.689191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.689360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:57.641 [2024-11-27 04:57:04.689408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:34:57.641 [2024-11-27 04:57:04.689427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.705038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.705227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:57.641 [2024-11-27 04:57:04.705287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.577 ms 00:34:57.641 [2024-11-27 04:57:04.705336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.719532] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:57.641 [2024-11-27 04:57:04.719702] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:57.641 [2024-11-27 04:57:04.719768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.719790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:57.641 [2024-11-27 04:57:04.719811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.306 ms 00:34:57.641 [2024-11-27 04:57:04.719829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.745582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.745737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:57.641 [2024-11-27 04:57:04.745797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.700 ms 00:34:57.641 [2024-11-27 04:57:04.745820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.758081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.758180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:57.641 [2024-11-27 04:57:04.758227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:34:57.641 [2024-11-27 04:57:04.758249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.769661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.769760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:57.641 [2024-11-27 04:57:04.769806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.373 ms 00:34:57.641 [2024-11-27 04:57:04.769828] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.770433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.770478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:57.641 [2024-11-27 04:57:04.770641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:34:57.641 [2024-11-27 04:57:04.770671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.825117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.825263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:57.641 [2024-11-27 04:57:04.825327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.414 ms 00:34:57.641 [2024-11-27 04:57:04.825358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.641 [2024-11-27 04:57:04.835481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:57.641 [2024-11-27 04:57:04.837721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.641 [2024-11-27 04:57:04.837827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:57.641 [2024-11-27 04:57:04.837877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.318 ms 00:34:57.641 [2024-11-27 04:57:04.837899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.642 [2024-11-27 04:57:04.837985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.642 [2024-11-27 04:57:04.838013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:57.642 [2024-11-27 04:57:04.838036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:57.642 [2024-11-27 04:57:04.838054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.642 [2024-11-27 04:57:04.838152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.642 [2024-11-27 04:57:04.838272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:57.642 [2024-11-27 04:57:04.838293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:57.642 [2024-11-27 04:57:04.838311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.642 [2024-11-27 04:57:04.838343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.642 [2024-11-27 04:57:04.838364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:57.642 [2024-11-27 04:57:04.838426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:57.642 [2024-11-27 04:57:04.838455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.642 [2024-11-27 04:57:04.838498] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:57.642 [2024-11-27 04:57:04.838520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.642 [2024-11-27 04:57:04.838539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:57.642 [2024-11-27 04:57:04.838558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:57.642 [2024-11-27 04:57:04.838576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.902 [2024-11-27 04:57:04.862484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:57.902 [2024-11-27 04:57:04.862603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:57.902 [2024-11-27 04:57:04.862660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.877 ms 00:34:57.903 [2024-11-27 04:57:04.862671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.903 [2024-11-27 04:57:04.862737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:57.903 [2024-11-27 04:57:04.862747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:57.903 [2024-11-27 04:57:04.862756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:57.903 [2024-11-27 04:57:04.862764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:57.903 [2024-11-27 04:57:04.863717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.373 ms, result 0 00:34:58.843  [2024-11-27T04:57:06.989Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-27T04:57:07.924Z] Copying: 34/1024 [MB] (14 MBps) [2024-11-27T04:57:09.306Z] Copying: 66/1024 [MB] (32 MBps) [2024-11-27T04:57:10.248Z] Copying: 86/1024 [MB] (19 MBps) [2024-11-27T04:57:11.191Z] Copying: 114/1024 [MB] (28 MBps) [2024-11-27T04:57:12.138Z] Copying: 130/1024 [MB] (16 MBps) [2024-11-27T04:57:13.072Z] Copying: 146/1024 [MB] (15 MBps) [2024-11-27T04:57:14.004Z] Copying: 183/1024 [MB] (36 MBps) [2024-11-27T04:57:14.936Z] Copying: 233/1024 [MB] (50 MBps) [2024-11-27T04:57:16.307Z] Copying: 282/1024 [MB] (49 MBps) [2024-11-27T04:57:17.246Z] Copying: 332/1024 [MB] (49 MBps) [2024-11-27T04:57:18.186Z] Copying: 373/1024 [MB] (41 MBps) [2024-11-27T04:57:19.125Z] Copying: 387/1024 [MB] (13 MBps) [2024-11-27T04:57:20.067Z] Copying: 408/1024 [MB] (21 MBps) [2024-11-27T04:57:21.011Z] Copying: 427/1024 [MB] (18 MBps) [2024-11-27T04:57:21.950Z] Copying: 442/1024 [MB] (14 MBps) [2024-11-27T04:57:22.883Z] Copying: 467/1024 [MB] (25 MBps) [2024-11-27T04:57:24.325Z] Copying: 517/1024 [MB] (49 MBps) [2024-11-27T04:57:24.890Z] Copying: 567/1024 [MB] (50 MBps) [2024-11-27T04:57:26.274Z] Copying: 617/1024 [MB] (50 MBps) [2024-11-27T04:57:27.218Z] Copying: 647/1024 [MB] (30 MBps) [2024-11-27T04:57:28.160Z] Copying: 661/1024 [MB] (13 MBps) [2024-11-27T04:57:29.103Z] Copying: 671/1024 [MB] (10 MBps) [2024-11-27T04:57:30.063Z] Copying: 687/1024 [MB] (15 MBps) [2024-11-27T04:57:31.006Z] Copying: 702/1024 [MB] (15 MBps) [2024-11-27T04:57:31.951Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-27T04:57:32.895Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-27T04:57:34.282Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-27T04:57:35.228Z] Copying: 744/1024 [MB] (10 MBps) [2024-11-27T04:57:36.175Z] Copying: 754/1024 [MB] (10 MBps) [2024-11-27T04:57:37.119Z] Copying: 765/1024 [MB] (10 MBps) [2024-11-27T04:57:38.062Z] Copying: 775/1024 [MB] (10 MBps) [2024-11-27T04:57:39.006Z] Copying: 793/1024 [MB] (17 MBps) [2024-11-27T04:57:39.947Z] Copying: 813/1024 [MB] (19 MBps) [2024-11-27T04:57:40.891Z] Copying: 827/1024 [MB] (13 MBps) [2024-11-27T04:57:42.279Z] Copying: 842/1024 [MB] (14 MBps) [2024-11-27T04:57:43.218Z] Copying: 860/1024 [MB] (18 MBps) [2024-11-27T04:57:44.160Z] Copying: 879/1024 [MB] (18 MBps) [2024-11-27T04:57:45.104Z] Copying: 901/1024 [MB] (22 MBps) [2024-11-27T04:57:46.047Z] Copying: 923/1024 [MB] (21 MBps) [2024-11-27T04:57:46.989Z] Copying: 944/1024 [MB] (20 MBps) [2024-11-27T04:57:47.932Z] Copying: 965/1024 [MB] (21 MBps) [2024-11-27T04:57:49.327Z] 
Copying: 986/1024 [MB] (20 MBps) [2024-11-27T04:57:49.900Z] Copying: 1005/1024 [MB] (19 MBps) [2024-11-27T04:57:50.846Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-27T04:57:50.846Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-27 04:57:50.709312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.709826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:43.643 [2024-11-27 04:57:50.709935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:43.643 [2024-11-27 04:57:50.709963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.713402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:43.643 [2024-11-27 04:57:50.717161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.717335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:43.643 [2024-11-27 04:57:50.717412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:35:43.643 [2024-11-27 04:57:50.717437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.728581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.728747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:43.643 [2024-11-27 04:57:50.728776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.288 ms 00:35:43.643 [2024-11-27 04:57:50.728785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.750344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.750520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:43.643 [2024-11-27 04:57:50.750541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.537 ms 00:35:43.643 [2024-11-27 04:57:50.750551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.757194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.757363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:43.643 [2024-11-27 04:57:50.757382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.530 ms 00:35:43.643 [2024-11-27 04:57:50.757399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.783892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.783947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:43.643 [2024-11-27 04:57:50.783961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.447 ms 00:35:43.643 [2024-11-27 04:57:50.783969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.643 [2024-11-27 04:57:50.800019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.643 [2024-11-27 04:57:50.800214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:43.643 [2024-11-27 04:57:50.800236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.003 ms 00:35:43.643 [2024-11-27 04:57:50.800246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 
04:57:50.952298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.906 [2024-11-27 04:57:50.952363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:43.906 [2024-11-27 04:57:50.952375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 152.009 ms 00:35:43.906 [2024-11-27 04:57:50.952384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 04:57:50.978247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.906 [2024-11-27 04:57:50.978294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:43.906 [2024-11-27 04:57:50.978306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.840 ms 00:35:43.906 [2024-11-27 04:57:50.978313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 04:57:51.003193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.906 [2024-11-27 04:57:51.003357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:43.906 [2024-11-27 04:57:51.003376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.836 ms 00:35:43.906 [2024-11-27 04:57:51.003384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 04:57:51.027761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.906 [2024-11-27 04:57:51.027807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:43.906 [2024-11-27 04:57:51.027820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.341 ms 00:35:43.906 [2024-11-27 04:57:51.027828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 04:57:51.052175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.906 [2024-11-27 04:57:51.052220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:43.906 [2024-11-27 04:57:51.052232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.277 ms 00:35:43.906 [2024-11-27 04:57:51.052240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.906 [2024-11-27 04:57:51.052281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:43.906 [2024-11-27 04:57:51.052296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105984 / 261120 wr_cnt: 1 state: open 00:35:43.906 [2024-11-27 04:57:51.052308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:43.906 [2024-11-27 04:57:51.052380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 
04:57:51.052560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:35:43.907 [2024-11-27 04:57:51.052754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.052994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:43.907 [2024-11-27 04:57:51.053125] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:43.908 [2024-11-27 04:57:51.053134] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1070351d-6946-4b84-87eb-caed8417ea7c 00:35:43.908 [2024-11-27 04:57:51.053143] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105984 00:35:43.908 [2024-11-27 04:57:51.053152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106944 00:35:43.908 [2024-11-27 04:57:51.053160] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105984 00:35:43.908 [2024-11-27 04:57:51.053175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:35:43.908 [2024-11-27 04:57:51.053194] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:43.908 [2024-11-27 04:57:51.053203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:43.908 [2024-11-27 04:57:51.053212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:43.908 [2024-11-27 
04:57:51.053219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:43.908 [2024-11-27 04:57:51.053226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:43.908 [2024-11-27 04:57:51.053234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.908 [2024-11-27 04:57:51.053243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:43.908 [2024-11-27 04:57:51.053252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:35:43.908 [2024-11-27 04:57:51.053260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.066685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.908 [2024-11-27 04:57:51.066853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:43.908 [2024-11-27 04:57:51.066871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.407 ms 00:35:43.908 [2024-11-27 04:57:51.066879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.067411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:43.908 [2024-11-27 04:57:51.067449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:43.908 [2024-11-27 04:57:51.067462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:35:43.908 [2024-11-27 04:57:51.067470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.103971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:43.908 [2024-11-27 04:57:51.104020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:43.908 [2024-11-27 04:57:51.104031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:43.908 [2024-11-27 04:57:51.104039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.104127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:43.908 [2024-11-27 04:57:51.104137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:43.908 [2024-11-27 04:57:51.104146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:43.908 [2024-11-27 04:57:51.104155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.104229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:43.908 [2024-11-27 04:57:51.104245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:43.908 [2024-11-27 04:57:51.104254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:43.908 [2024-11-27 04:57:51.104262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:43.908 [2024-11-27 04:57:51.104276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:43.908 [2024-11-27 04:57:51.104285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:43.908 [2024-11-27 04:57:51.104294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:43.908 [2024-11-27 04:57:51.104301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.188837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.189054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:35:44.169 [2024-11-27 04:57:51.189097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.189106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.259081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.259265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:44.169 [2024-11-27 04:57:51.259284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.259293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.259356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.259366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:44.169 [2024-11-27 04:57:51.259382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.259390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.259447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.259457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:44.169 [2024-11-27 04:57:51.259466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.259474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.259578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.259590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:44.169 [2024-11-27 04:57:51.259598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.259610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.169 [2024-11-27 04:57:51.259644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.169 [2024-11-27 04:57:51.259654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:44.169 [2024-11-27 04:57:51.259663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.169 [2024-11-27 04:57:51.259671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.170 [2024-11-27 04:57:51.259716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.170 [2024-11-27 04:57:51.259725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:44.170 [2024-11-27 04:57:51.259733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.170 [2024-11-27 04:57:51.259744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.170 [2024-11-27 04:57:51.259793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:44.170 [2024-11-27 04:57:51.259805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:44.170 [2024-11-27 04:57:51.259813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:44.170 [2024-11-27 04:57:51.259821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.170 [2024-11-27 04:57:51.259960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 553.186 ms, result 0 00:35:45.557 00:35:45.557 00:35:45.557 04:57:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:45.818 [2024-11-27 04:57:52.813082] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:35:45.818 [2024-11-27 04:57:52.813469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79482 ] 00:35:45.818 [2024-11-27 04:57:52.976402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.080 [2024-11-27 04:57:53.099572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.342 [2024-11-27 04:57:53.397420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:46.342 [2024-11-27 04:57:53.397504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:46.605 [2024-11-27 04:57:53.558261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.558484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:46.605 [2024-11-27 04:57:53.558508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:46.605 [2024-11-27 04:57:53.558518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.558590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.558605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:46.605 [2024-11-27 04:57:53.558614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:35:46.605 [2024-11-27 04:57:53.558622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.558645] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:46.605 [2024-11-27 04:57:53.559364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:46.605 [2024-11-27 04:57:53.559384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.559393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:46.605 [2024-11-27 04:57:53.559403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:35:46.605 [2024-11-27 04:57:53.559411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.561208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:46.605 [2024-11-27 04:57:53.575216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.575281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:46.605 [2024-11-27 04:57:53.575295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.012 ms 00:35:46.605 [2024-11-27 04:57:53.575303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.575382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:35:46.605 [2024-11-27 04:57:53.575393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:46.605 [2024-11-27 04:57:53.575401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:35:46.605 [2024-11-27 04:57:53.575409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.583415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.583456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:46.605 [2024-11-27 04:57:53.583474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.930 ms 00:35:46.605 [2024-11-27 04:57:53.583482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.583562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.583571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:46.605 [2024-11-27 04:57:53.583580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:35:46.605 [2024-11-27 04:57:53.583588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.605 [2024-11-27 04:57:53.583631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.605 [2024-11-27 04:57:53.583640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:46.605 [2024-11-27 04:57:53.583649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:46.606 [2024-11-27 04:57:53.583660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.606 [2024-11-27 04:57:53.583682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:46.606 [2024-11-27 04:57:53.587737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.606 [2024-11-27 04:57:53.587938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:46.606 [2024-11-27 04:57:53.587958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:35:46.606 [2024-11-27 04:57:53.587967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.606 [2024-11-27 04:57:53.588006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.606 [2024-11-27 04:57:53.588016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:46.606 [2024-11-27 04:57:53.588026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:46.606 [2024-11-27 04:57:53.588033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.606 [2024-11-27 04:57:53.588105] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:46.606 [2024-11-27 04:57:53.588133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:46.606 [2024-11-27 04:57:53.588173] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:46.606 [2024-11-27 04:57:53.588191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:46.606 [2024-11-27 04:57:53.588299] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:46.606 [2024-11-27 04:57:53.588311] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:46.606 [2024-11-27 04:57:53.588322] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:46.606 [2024-11-27 04:57:53.588334] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588343] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588352] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:46.606 [2024-11-27 04:57:53.588360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:46.606 [2024-11-27 04:57:53.588372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:46.606 [2024-11-27 04:57:53.588380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:46.606 [2024-11-27 04:57:53.588389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.606 [2024-11-27 04:57:53.588396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:46.606 [2024-11-27 04:57:53.588404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:35:46.606 [2024-11-27 04:57:53.588412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.606 [2024-11-27 04:57:53.588500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.606 [2024-11-27 04:57:53.588510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:46.606 [2024-11-27 04:57:53.588517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:35:46.606 [2024-11-27 04:57:53.588524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.606 [2024-11-27 04:57:53.588631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:46.606 [2024-11-27 04:57:53.588642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:46.606 [2024-11-27 04:57:53.588650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:46.606 [2024-11-27 04:57:53.588674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:46.606 [2024-11-27 04:57:53.588697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:46.606 [2024-11-27 04:57:53.588712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:46.606 [2024-11-27 04:57:53.588720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:46.606 [2024-11-27 04:57:53.588727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:46.606 [2024-11-27 04:57:53.588741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:46.606 [2024-11-27 04:57:53.588750] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:46.606 [2024-11-27 04:57:53.588758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:46.606 [2024-11-27 04:57:53.588771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:46.606 [2024-11-27 04:57:53.588791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:46.606 [2024-11-27 04:57:53.588812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:46.606 [2024-11-27 04:57:53.588832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:46.606 [2024-11-27 04:57:53.588853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:46.606 [2024-11-27 04:57:53.588866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:46.606 [2024-11-27 04:57:53.588873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:46.606 [2024-11-27 04:57:53.588886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:46.606 [2024-11-27 04:57:53.588893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:46.606 [2024-11-27 04:57:53.588899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:46.606 [2024-11-27 04:57:53.588906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:46.606 [2024-11-27 04:57:53.588912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:46.606 [2024-11-27 04:57:53.588919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:46.606 [2024-11-27 04:57:53.588932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:46.606 [2024-11-27 04:57:53.588939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588947] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:46.606 [2024-11-27 04:57:53.588955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:46.606 [2024-11-27 04:57:53.588963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:35:46.606 [2024-11-27 04:57:53.588971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:46.606 [2024-11-27 04:57:53.588979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:46.606 [2024-11-27 04:57:53.588986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:46.606 [2024-11-27 04:57:53.588992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:46.606 [2024-11-27 04:57:53.588999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:46.606 [2024-11-27 04:57:53.589006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:46.606 [2024-11-27 04:57:53.589013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:46.606 [2024-11-27 04:57:53.589021] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:46.606 [2024-11-27 04:57:53.589033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:46.606 [2024-11-27 04:57:53.589041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:46.606 [2024-11-27 04:57:53.589048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:46.606 [2024-11-27 04:57:53.589055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:46.606 [2024-11-27 04:57:53.589085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:46.606 [2024-11-27 04:57:53.589093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:46.606 [2024-11-27 04:57:53.589101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:46.606 [2024-11-27 04:57:53.589109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:46.606 [2024-11-27 04:57:53.589117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:46.606 [2024-11-27 04:57:53.589125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:46.606 [2024-11-27 04:57:53.589132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:46.606 [2024-11-27 04:57:53.589140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:46.606 [2024-11-27 04:57:53.589147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:46.606 [2024-11-27 04:57:53.589155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:46.606 [2024-11-27 04:57:53.589162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
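A note on reading the superblock table that ends here: each row gives a region's offset and size in FTL blocks, in hex, while the ftl_layout.c dump a few lines up prints the same regions in MiB. A minimal sketch of the conversion, assuming a 4 KiB FTL block size (an assumption inferred from the MiB figures in this log, not read out of SPDK):

    #!/usr/bin/env python3
    # Decode a few (type, blk_offs, blk_sz) rows copied from the
    # "SB metadata layout - nvc" table above. The 4 KiB block size is an
    # assumption inferred from the MiB values ftl_layout.c prints in this log.
    FTL_BLOCK_SIZE = 4096  # bytes (assumed)

    def blocks_to_mib(nblocks):
        return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

    rows = [
        (0x0, 0x0, 0x20),                # lines up with "Region sb": 0.00 / 0.12 MiB
        (0x2, 0x20, 0x5000),             # lines up with "Region l2p": 0.12 / 80.00 MiB
        (0xfffffffe, 0x7220, 0x13c0e0),  # trailing unmapped area
    ]

    for rtype, offs, size in rows:
        print(f"type 0x{rtype:x}: offset {blocks_to_mib(offs):.2f} MiB, "
              f"size {blocks_to_mib(size):.2f} MiB, "
              f"end {blocks_to_mib(offs + size):.2f} MiB")

The last row ends at 0x7220 + 0x13c0e0 = 0x143300 blocks = 5171.00 MiB, which is exactly the "NV cache device capacity" reported above, so the region table tiles the whole cache device. The same kind of arithmetic checks out against the shutdown statistics earlier in this log: WAF = total writes / user writes = 106944 / 105984 ≈ 1.0091.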
00:35:46.607 [2024-11-27 04:57:53.589169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:46.607 [2024-11-27 04:57:53.589177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:46.607 [2024-11-27 04:57:53.589186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:46.607 [2024-11-27 04:57:53.589194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:46.607 [2024-11-27 04:57:53.589202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:46.607 [2024-11-27 04:57:53.589209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:46.607 [2024-11-27 04:57:53.589218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.589227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:46.607 [2024-11-27 04:57:53.589235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:35:46.607 [2024-11-27 04:57:53.589244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.621049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.621250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:46.607 [2024-11-27 04:57:53.621657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.760 ms 00:35:46.607 [2024-11-27 04:57:53.621711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.621830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.621855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:46.607 [2024-11-27 04:57:53.621876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:35:46.607 [2024-11-27 04:57:53.621981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.671314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.671522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:46.607 [2024-11-27 04:57:53.671712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.226 ms 00:35:46.607 [2024-11-27 04:57:53.671754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.671816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.671847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:46.607 [2024-11-27 04:57:53.671868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:46.607 [2024-11-27 04:57:53.671888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.672480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.672732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:46.607 [2024-11-27 
04:57:53.672808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:35:46.607 [2024-11-27 04:57:53.672832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.673006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.673035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:46.607 [2024-11-27 04:57:53.673114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:35:46.607 [2024-11-27 04:57:53.673139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.688706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.688858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:46.607 [2024-11-27 04:57:53.688914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.531 ms 00:35:46.607 [2024-11-27 04:57:53.688936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.703368] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:46.607 [2024-11-27 04:57:53.703545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:46.607 [2024-11-27 04:57:53.703611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.703633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:46.607 [2024-11-27 04:57:53.703654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.550 ms 00:35:46.607 [2024-11-27 04:57:53.703673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.729346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.729506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:46.607 [2024-11-27 04:57:53.729566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.621 ms 00:35:46.607 [2024-11-27 04:57:53.729589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.742289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.742449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:46.607 [2024-11-27 04:57:53.742505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.573 ms 00:35:46.607 [2024-11-27 04:57:53.742527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.755163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.755316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:46.607 [2024-11-27 04:57:53.755372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.585 ms 00:35:46.607 [2024-11-27 04:57:53.755395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.607 [2024-11-27 04:57:53.756031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.607 [2024-11-27 04:57:53.756181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:46.607 [2024-11-27 04:57:53.756238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.526 ms 00:35:46.607 [2024-11-27 04:57:53.756261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.820351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.820575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:46.869 [2024-11-27 04:57:53.820637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.053 ms 00:35:46.869 [2024-11-27 04:57:53.820661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.832145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:46.869 [2024-11-27 04:57:53.835223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.835370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:46.869 [2024-11-27 04:57:53.835388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:35:46.869 [2024-11-27 04:57:53.835398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.835492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.835504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:46.869 [2024-11-27 04:57:53.835516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:35:46.869 [2024-11-27 04:57:53.835525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.837257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.837421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:46.869 [2024-11-27 04:57:53.837441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:35:46.869 [2024-11-27 04:57:53.837449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.837482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.837490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:46.869 [2024-11-27 04:57:53.837500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:46.869 [2024-11-27 04:57:53.837515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.837555] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:46.869 [2024-11-27 04:57:53.837567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.837576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:46.869 [2024-11-27 04:57:53.837585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:46.869 [2024-11-27 04:57:53.837593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.862597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.862646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:46.869 [2024-11-27 04:57:53.862665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.984 ms 00:35:46.869 [2024-11-27 04:57:53.862673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:35:46.869 [2024-11-27 04:57:53.862762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:46.869 [2024-11-27 04:57:53.862773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:46.869 [2024-11-27 04:57:53.862783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:35:46.869 [2024-11-27 04:57:53.862791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:46.869 [2024-11-27 04:57:53.864034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.276 ms, result 0 00:35:48.311  [2024-11-27T04:57:56.087Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-27T04:57:57.467Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-27T04:57:58.411Z] Copying: 37/1024 [MB] (16 MBps) [2024-11-27T04:57:59.367Z] Copying: 51/1024 [MB] (13 MBps) [2024-11-27T04:58:00.314Z] Copying: 63/1024 [MB] (12 MBps) [2024-11-27T04:58:01.255Z] Copying: 75/1024 [MB] (11 MBps) [2024-11-27T04:58:02.201Z] Copying: 93/1024 [MB] (17 MBps) [2024-11-27T04:58:03.144Z] Copying: 112/1024 [MB] (18 MBps) [2024-11-27T04:58:04.088Z] Copying: 126/1024 [MB] (14 MBps) [2024-11-27T04:58:05.469Z] Copying: 140/1024 [MB] (13 MBps) [2024-11-27T04:58:06.412Z] Copying: 159/1024 [MB] (18 MBps) [2024-11-27T04:58:07.357Z] Copying: 176/1024 [MB] (17 MBps) [2024-11-27T04:58:08.302Z] Copying: 190/1024 [MB] (13 MBps) [2024-11-27T04:58:09.248Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-27T04:58:10.190Z] Copying: 221/1024 [MB] (16 MBps) [2024-11-27T04:58:11.136Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-27T04:58:12.081Z] Copying: 251/1024 [MB] (11 MBps) [2024-11-27T04:58:13.469Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-27T04:58:14.413Z] Copying: 281/1024 [MB] (18 MBps) [2024-11-27T04:58:15.359Z] Copying: 293/1024 [MB] (12 MBps) [2024-11-27T04:58:16.305Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-27T04:58:17.248Z] Copying: 319/1024 [MB] (15 MBps) [2024-11-27T04:58:18.190Z] Copying: 337/1024 [MB] (17 MBps) [2024-11-27T04:58:19.136Z] Copying: 355/1024 [MB] (17 MBps) [2024-11-27T04:58:20.081Z] Copying: 369/1024 [MB] (14 MBps) [2024-11-27T04:58:21.467Z] Copying: 388/1024 [MB] (18 MBps) [2024-11-27T04:58:22.409Z] Copying: 403/1024 [MB] (15 MBps) [2024-11-27T04:58:23.356Z] Copying: 418/1024 [MB] (14 MBps) [2024-11-27T04:58:24.301Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-27T04:58:25.246Z] Copying: 450/1024 [MB] (18 MBps) [2024-11-27T04:58:26.191Z] Copying: 467/1024 [MB] (16 MBps) [2024-11-27T04:58:27.229Z] Copying: 486/1024 [MB] (18 MBps) [2024-11-27T04:58:28.178Z] Copying: 501/1024 [MB] (15 MBps) [2024-11-27T04:58:29.121Z] Copying: 514/1024 [MB] (12 MBps) [2024-11-27T04:58:30.074Z] Copying: 534/1024 [MB] (19 MBps) [2024-11-27T04:58:31.464Z] Copying: 554/1024 [MB] (20 MBps) [2024-11-27T04:58:32.409Z] Copying: 572/1024 [MB] (17 MBps) [2024-11-27T04:58:33.355Z] Copying: 584/1024 [MB] (12 MBps) [2024-11-27T04:58:34.300Z] Copying: 596/1024 [MB] (12 MBps) [2024-11-27T04:58:35.246Z] Copying: 608/1024 [MB] (11 MBps) [2024-11-27T04:58:36.193Z] Copying: 623/1024 [MB] (14 MBps) [2024-11-27T04:58:37.138Z] Copying: 642/1024 [MB] (19 MBps) [2024-11-27T04:58:38.084Z] Copying: 666/1024 [MB] (24 MBps) [2024-11-27T04:58:39.474Z] Copying: 680/1024 [MB] (14 MBps) [2024-11-27T04:58:40.416Z] Copying: 690/1024 [MB] (10 MBps) [2024-11-27T04:58:41.360Z] Copying: 709/1024 [MB] (18 MBps) [2024-11-27T04:58:42.305Z] Copying: 724/1024 [MB] (15 MBps) [2024-11-27T04:58:43.249Z] Copying: 740/1024 [MB] (15 MBps) 
[2024-11-27T04:58:44.194Z] Copying: 754/1024 [MB] (13 MBps) [2024-11-27T04:58:45.138Z] Copying: 768/1024 [MB] (14 MBps) [2024-11-27T04:58:46.083Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-27T04:58:47.473Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-27T04:58:48.418Z] Copying: 799/1024 [MB] (10 MBps) [2024-11-27T04:58:49.362Z] Copying: 813/1024 [MB] (13 MBps) [2024-11-27T04:58:50.302Z] Copying: 832/1024 [MB] (19 MBps) [2024-11-27T04:58:51.243Z] Copying: 851/1024 [MB] (18 MBps) [2024-11-27T04:58:52.180Z] Copying: 863/1024 [MB] (12 MBps) [2024-11-27T04:58:53.122Z] Copying: 881/1024 [MB] (18 MBps) [2024-11-27T04:58:54.063Z] Copying: 902/1024 [MB] (20 MBps) [2024-11-27T04:58:55.448Z] Copying: 918/1024 [MB] (16 MBps) [2024-11-27T04:58:56.394Z] Copying: 937/1024 [MB] (18 MBps) [2024-11-27T04:58:57.338Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-27T04:58:58.285Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-27T04:58:59.308Z] Copying: 968/1024 [MB] (10 MBps) [2024-11-27T04:59:00.251Z] Copying: 982/1024 [MB] (13 MBps) [2024-11-27T04:59:01.198Z] Copying: 994/1024 [MB] (12 MBps) [2024-11-27T04:59:02.144Z] Copying: 1004/1024 [MB] (10 MBps) [2024-11-27T04:59:03.090Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-27T04:59:03.090Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 04:59:02.896819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.896894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:55.887 [2024-11-27 04:59:02.896921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:55.887 [2024-11-27 04:59:02.896930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.896954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:55.887 [2024-11-27 04:59:02.900387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.900434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:55.887 [2024-11-27 04:59:02.900446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:36:55.887 [2024-11-27 04:59:02.900455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.900680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.900691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:55.887 [2024-11-27 04:59:02.900704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:36:55.887 [2024-11-27 04:59:02.900712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.906701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.906750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:55.887 [2024-11-27 04:59:02.906761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.973 ms 00:36:55.887 [2024-11-27 04:59:02.906770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.913410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.913457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:55.887 [2024-11-27 04:59:02.913469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.598 ms 00:36:55.887 
[2024-11-27 04:59:02.913484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.940200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.940261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:55.887 [2024-11-27 04:59:02.940274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.656 ms 00:36:55.887 [2024-11-27 04:59:02.940282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:55.887 [2024-11-27 04:59:02.956506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:55.887 [2024-11-27 04:59:02.956558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:55.887 [2024-11-27 04:59:02.956571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.175 ms 00:36:55.887 [2024-11-27 04:59:02.956579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.149 [2024-11-27 04:59:03.329135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.149 [2024-11-27 04:59:03.329201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:56.149 [2024-11-27 04:59:03.329217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 372.499 ms 00:36:56.149 [2024-11-27 04:59:03.329226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.413 [2024-11-27 04:59:03.354962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.413 [2024-11-27 04:59:03.355018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:56.413 [2024-11-27 04:59:03.355031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.709 ms 00:36:56.413 [2024-11-27 04:59:03.355039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.413 [2024-11-27 04:59:03.381283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.413 [2024-11-27 04:59:03.381344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:56.413 [2024-11-27 04:59:03.381357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.174 ms 00:36:56.413 [2024-11-27 04:59:03.381364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.413 [2024-11-27 04:59:03.406663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.413 [2024-11-27 04:59:03.406709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:56.413 [2024-11-27 04:59:03.406721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.251 ms 00:36:56.414 [2024-11-27 04:59:03.406729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.414 [2024-11-27 04:59:03.431978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.414 [2024-11-27 04:59:03.432027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:56.414 [2024-11-27 04:59:03.432039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:36:56.414 [2024-11-27 04:59:03.432046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.414 [2024-11-27 04:59:03.432110] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:56.414 [2024-11-27 04:59:03.432126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:56.414 
[2024-11-27 04:59:03.432137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:36:56.414 [2024-11-27 04:59:03.432327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:56.414 [2024-11-27 04:59:03.432747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:56.415 [2024-11-27 04:59:03.432902] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:36:56.415 [2024-11-27 04:59:03.432910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1070351d-6946-4b84-87eb-caed8417ea7c 00:36:56.415 [2024-11-27 04:59:03.432918] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:56.415 [2024-11-27 04:59:03.432926] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26048 00:36:56.415 [2024-11-27 04:59:03.432933] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25088 00:36:56.415 [2024-11-27 04:59:03.432949] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0383 00:36:56.415 [2024-11-27 04:59:03.432956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:56.415 [2024-11-27 04:59:03.432971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:56.415 [2024-11-27 04:59:03.432983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:56.415 [2024-11-27 04:59:03.432990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:56.415 [2024-11-27 04:59:03.432996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:56.415 [2024-11-27 04:59:03.433004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.415 [2024-11-27 04:59:03.433011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:56.415 [2024-11-27 04:59:03.433020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:36:56.415 [2024-11-27 04:59:03.433028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.447009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.415 [2024-11-27 04:59:03.447074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:56.415 [2024-11-27 04:59:03.447086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.944 ms 00:36:56.415 [2024-11-27 04:59:03.447094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.447478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:56.415 [2024-11-27 04:59:03.447497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:56.415 [2024-11-27 04:59:03.447507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:36:56.415 [2024-11-27 04:59:03.447514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.484396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.415 [2024-11-27 04:59:03.484451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:56.415 [2024-11-27 04:59:03.484465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.415 [2024-11-27 04:59:03.484474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.484548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.415 [2024-11-27 04:59:03.484558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:56.415 [2024-11-27 04:59:03.484569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.415 [2024-11-27 04:59:03.484578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.484667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:36:56.415 [2024-11-27 04:59:03.484683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:56.415 [2024-11-27 04:59:03.484693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.415 [2024-11-27 04:59:03.484702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.484719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.415 [2024-11-27 04:59:03.484728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:56.415 [2024-11-27 04:59:03.484737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.415 [2024-11-27 04:59:03.484745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.415 [2024-11-27 04:59:03.571045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.415 [2024-11-27 04:59:03.571132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:56.415 [2024-11-27 04:59:03.571146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.415 [2024-11-27 04:59:03.571155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.641618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.641683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:56.677 [2024-11-27 04:59:03.641697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.641706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.641767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.641777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:56.677 [2024-11-27 04:59:03.641793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.641802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.641866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.641877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:56.677 [2024-11-27 04:59:03.641886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.641894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.641994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.642006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:56.677 [2024-11-27 04:59:03.642015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.642026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.642063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.642100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:56.677 [2024-11-27 04:59:03.642108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.642116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 
[2024-11-27 04:59:03.642161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.642170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:56.677 [2024-11-27 04:59:03.642180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.642191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.642240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:56.677 [2024-11-27 04:59:03.642251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:56.677 [2024-11-27 04:59:03.642260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:56.677 [2024-11-27 04:59:03.642268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:56.677 [2024-11-27 04:59:03.642404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 745.544 ms, result 0 00:36:57.620 00:36:57.620 00:36:57.620 04:59:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:59.537 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:59.537 04:59:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:59.537 04:59:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:36:59.537 04:59:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:59.799 Process with pid 77350 is not found 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77350 00:36:59.799 04:59:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77350 ']' 00:36:59.799 04:59:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77350 00:36:59.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77350) - No such process 00:36:59.799 04:59:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77350 is not found' 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:36:59.799 Remove shared memory files 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:59.799 04:59:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:36:59.799 00:36:59.799 real 4m39.716s 00:36:59.799 user 4m27.842s 00:36:59.799 sys 0m11.688s 00:36:59.799 ************************************ 00:36:59.799 END TEST ftl_restore 00:36:59.799 04:59:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.799 04:59:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:36:59.799 ************************************ 00:36:59.799 04:59:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:59.799 04:59:06 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:59.799 04:59:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.799 04:59:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:59.799 ************************************ 00:36:59.799 START TEST ftl_dirty_shutdown 00:36:59.799 ************************************ 00:36:59.799 04:59:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:59.799 * Looking for test storage... 00:37:00.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.061 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:00.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.062 --rc genhtml_branch_coverage=1 00:37:00.062 --rc genhtml_function_coverage=1 00:37:00.062 --rc genhtml_legend=1 00:37:00.062 --rc geninfo_all_blocks=1 00:37:00.062 --rc geninfo_unexecuted_blocks=1 00:37:00.062 00:37:00.062 ' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:00.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.062 --rc genhtml_branch_coverage=1 00:37:00.062 --rc genhtml_function_coverage=1 00:37:00.062 --rc genhtml_legend=1 00:37:00.062 --rc geninfo_all_blocks=1 00:37:00.062 --rc geninfo_unexecuted_blocks=1 00:37:00.062 00:37:00.062 ' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:00.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.062 --rc genhtml_branch_coverage=1 00:37:00.062 --rc genhtml_function_coverage=1 00:37:00.062 --rc genhtml_legend=1 00:37:00.062 --rc geninfo_all_blocks=1 00:37:00.062 --rc geninfo_unexecuted_blocks=1 00:37:00.062 00:37:00.062 ' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:00.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.062 --rc genhtml_branch_coverage=1 00:37:00.062 --rc genhtml_function_coverage=1 00:37:00.062 --rc genhtml_legend=1 00:37:00.062 --rc geninfo_all_blocks=1 00:37:00.062 --rc geninfo_unexecuted_blocks=1 00:37:00.062 00:37:00.062 ' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:37:00.062 04:59:07 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80302 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80302 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80302 ']' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:00.062 04:59:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:00.062 [2024-11-27 04:59:07.186473] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
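For anyone decoding the xtrace above: dirty_shutdown.sh was invoked as 'dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0', and the 'getopts :u:c:' loop visible in the trace turns the -c argument into the NV-cache BDF (nv_cache=0000:00:10.0) while the remaining positional argument becomes the base device (device=0000:00:11.0). A minimal sketch of that argument-handling pattern, reconstructed from the trace; the -u option is admitted by the optstring but never exercised in this run, so its handler below is an assumption:

#!/usr/bin/env bash
# Sketch of the option handling seen in the dirty_shutdown.sh xtrace.
while getopts ':u:c:' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;   # -c <bdf>: PCIe address of the NV cache device
        u) uuid=$OPTARG ;;       # -u <uuid>: assumed meaning; not used in this run
        *) echo "usage: $0 [-u uuid] [-c nv_cache_bdf] <base_bdf>" >&2; exit 1 ;;
    esac
done
shift $(( OPTIND - 1 ))          # the trace shows the equivalent literal "shift 2"
device=$1
echo "nv_cache=${nv_cache:-none} device=${device}"

Fed the same arguments, the sketch prints nv_cache=0000:00:10.0 device=0000:00:11.0, matching the assignments recorded in the trace before spdk_tgt is launched.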
00:37:00.062 [2024-11-27 04:59:07.186616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80302 ] 00:37:00.324 [2024-11-27 04:59:07.344906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.324 [2024-11-27 04:59:07.463225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:01.269 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:01.531 { 00:37:01.531 "name": "nvme0n1", 00:37:01.531 "aliases": [ 00:37:01.531 "c14011a3-006e-4d5d-be8d-1ebda7930998" 00:37:01.531 ], 00:37:01.531 "product_name": "NVMe disk", 00:37:01.531 "block_size": 4096, 00:37:01.531 "num_blocks": 1310720, 00:37:01.531 "uuid": "c14011a3-006e-4d5d-be8d-1ebda7930998", 00:37:01.531 "numa_id": -1, 00:37:01.531 "assigned_rate_limits": { 00:37:01.531 "rw_ios_per_sec": 0, 00:37:01.531 "rw_mbytes_per_sec": 0, 00:37:01.531 "r_mbytes_per_sec": 0, 00:37:01.531 "w_mbytes_per_sec": 0 00:37:01.531 }, 00:37:01.531 "claimed": true, 00:37:01.531 "claim_type": "read_many_write_one", 00:37:01.531 "zoned": false, 00:37:01.531 "supported_io_types": { 00:37:01.531 "read": true, 00:37:01.531 "write": true, 00:37:01.531 "unmap": true, 00:37:01.531 "flush": true, 00:37:01.531 "reset": true, 00:37:01.531 "nvme_admin": true, 00:37:01.531 "nvme_io": true, 00:37:01.531 "nvme_io_md": false, 00:37:01.531 "write_zeroes": true, 00:37:01.531 "zcopy": false, 00:37:01.531 "get_zone_info": false, 00:37:01.531 "zone_management": false, 00:37:01.531 "zone_append": false, 00:37:01.531 "compare": true, 00:37:01.531 "compare_and_write": false, 00:37:01.531 "abort": true, 00:37:01.531 "seek_hole": false, 00:37:01.531 "seek_data": false, 00:37:01.531 
"copy": true, 00:37:01.531 "nvme_iov_md": false 00:37:01.531 }, 00:37:01.531 "driver_specific": { 00:37:01.531 "nvme": [ 00:37:01.531 { 00:37:01.531 "pci_address": "0000:00:11.0", 00:37:01.531 "trid": { 00:37:01.531 "trtype": "PCIe", 00:37:01.531 "traddr": "0000:00:11.0" 00:37:01.531 }, 00:37:01.531 "ctrlr_data": { 00:37:01.531 "cntlid": 0, 00:37:01.531 "vendor_id": "0x1b36", 00:37:01.531 "model_number": "QEMU NVMe Ctrl", 00:37:01.531 "serial_number": "12341", 00:37:01.531 "firmware_revision": "8.0.0", 00:37:01.531 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:01.531 "oacs": { 00:37:01.531 "security": 0, 00:37:01.531 "format": 1, 00:37:01.531 "firmware": 0, 00:37:01.531 "ns_manage": 1 00:37:01.531 }, 00:37:01.531 "multi_ctrlr": false, 00:37:01.531 "ana_reporting": false 00:37:01.531 }, 00:37:01.531 "vs": { 00:37:01.531 "nvme_version": "1.4" 00:37:01.531 }, 00:37:01.531 "ns_data": { 00:37:01.531 "id": 1, 00:37:01.531 "can_share": false 00:37:01.531 } 00:37:01.531 } 00:37:01.531 ], 00:37:01.531 "mp_policy": "active_passive" 00:37:01.531 } 00:37:01.531 } 00:37:01.531 ]' 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:01.531 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b2a62cea-fe9a-4e90-9b2b-7d890fe61b51 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:37:01.793 04:59:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2a62cea-fe9a-4e90-9b2b-7d890fe61b51 00:37:02.054 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:02.314 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=797a22e6-4fa1-488c-9a3d-f30535e7f51c 00:37:02.314 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 797a22e6-4fa1-488c-9a3d-f30535e7f51c 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:02.575 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:02.837 { 00:37:02.837 "name": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:02.837 "aliases": [ 00:37:02.837 "lvs/nvme0n1p0" 00:37:02.837 ], 00:37:02.837 "product_name": "Logical Volume", 00:37:02.837 "block_size": 4096, 00:37:02.837 "num_blocks": 26476544, 00:37:02.837 "uuid": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:02.837 "assigned_rate_limits": { 00:37:02.837 "rw_ios_per_sec": 0, 00:37:02.837 "rw_mbytes_per_sec": 0, 00:37:02.837 "r_mbytes_per_sec": 0, 00:37:02.837 "w_mbytes_per_sec": 0 00:37:02.837 }, 00:37:02.837 "claimed": false, 00:37:02.837 "zoned": false, 00:37:02.837 "supported_io_types": { 00:37:02.837 "read": true, 00:37:02.837 "write": true, 00:37:02.837 "unmap": true, 00:37:02.837 "flush": false, 00:37:02.837 "reset": true, 00:37:02.837 "nvme_admin": false, 00:37:02.837 "nvme_io": false, 00:37:02.837 "nvme_io_md": false, 00:37:02.837 "write_zeroes": true, 00:37:02.837 "zcopy": false, 00:37:02.837 "get_zone_info": false, 00:37:02.837 "zone_management": false, 00:37:02.837 "zone_append": false, 00:37:02.837 "compare": false, 00:37:02.837 "compare_and_write": false, 00:37:02.837 "abort": false, 00:37:02.837 "seek_hole": true, 00:37:02.837 "seek_data": true, 00:37:02.837 "copy": false, 00:37:02.837 "nvme_iov_md": false 00:37:02.837 }, 00:37:02.837 "driver_specific": { 00:37:02.837 "lvol": { 00:37:02.837 "lvol_store_uuid": "797a22e6-4fa1-488c-9a3d-f30535e7f51c", 00:37:02.837 "base_bdev": "nvme0n1", 00:37:02.837 "thin_provision": true, 00:37:02.837 "num_allocated_clusters": 0, 00:37:02.837 "snapshot": false, 00:37:02.837 "clone": false, 00:37:02.837 "esnap_clone": false 00:37:02.837 } 00:37:02.837 } 00:37:02.837 } 00:37:02.837 ]' 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:37:02.837 04:59:09 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:03.096 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:03.097 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:03.097 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.355 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:03.355 { 00:37:03.355 "name": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:03.355 "aliases": [ 00:37:03.355 "lvs/nvme0n1p0" 00:37:03.355 ], 00:37:03.355 "product_name": "Logical Volume", 00:37:03.355 "block_size": 4096, 00:37:03.355 "num_blocks": 26476544, 00:37:03.355 "uuid": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:03.355 "assigned_rate_limits": { 00:37:03.355 "rw_ios_per_sec": 0, 00:37:03.355 "rw_mbytes_per_sec": 0, 00:37:03.355 "r_mbytes_per_sec": 0, 00:37:03.355 "w_mbytes_per_sec": 0 00:37:03.355 }, 00:37:03.355 "claimed": false, 00:37:03.355 "zoned": false, 00:37:03.355 "supported_io_types": { 00:37:03.355 "read": true, 00:37:03.355 "write": true, 00:37:03.355 "unmap": true, 00:37:03.355 "flush": false, 00:37:03.355 "reset": true, 00:37:03.355 "nvme_admin": false, 00:37:03.355 "nvme_io": false, 00:37:03.355 "nvme_io_md": false, 00:37:03.355 "write_zeroes": true, 00:37:03.355 "zcopy": false, 00:37:03.355 "get_zone_info": false, 00:37:03.355 "zone_management": false, 00:37:03.355 "zone_append": false, 00:37:03.355 "compare": false, 00:37:03.355 "compare_and_write": false, 00:37:03.355 "abort": false, 00:37:03.355 "seek_hole": true, 00:37:03.355 "seek_data": true, 00:37:03.355 "copy": false, 00:37:03.355 "nvme_iov_md": false 00:37:03.355 }, 00:37:03.355 "driver_specific": { 00:37:03.355 "lvol": { 00:37:03.355 "lvol_store_uuid": "797a22e6-4fa1-488c-9a3d-f30535e7f51c", 00:37:03.356 "base_bdev": "nvme0n1", 00:37:03.356 "thin_provision": true, 00:37:03.356 "num_allocated_clusters": 0, 00:37:03.356 "snapshot": false, 00:37:03.356 "clone": false, 00:37:03.356 "esnap_clone": false 00:37:03.356 } 00:37:03.356 } 00:37:03.356 } 00:37:03.356 ]' 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:37:03.356 04:59:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:03.614 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:03.873 { 00:37:03.873 "name": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:03.873 "aliases": [ 00:37:03.873 "lvs/nvme0n1p0" 00:37:03.873 ], 00:37:03.873 "product_name": "Logical Volume", 00:37:03.873 "block_size": 4096, 00:37:03.873 "num_blocks": 26476544, 00:37:03.873 "uuid": "8a910e5d-eb2e-45ca-8b1d-e380d9be6526", 00:37:03.873 "assigned_rate_limits": { 00:37:03.873 "rw_ios_per_sec": 0, 00:37:03.873 "rw_mbytes_per_sec": 0, 00:37:03.873 "r_mbytes_per_sec": 0, 00:37:03.873 "w_mbytes_per_sec": 0 00:37:03.873 }, 00:37:03.873 "claimed": false, 00:37:03.873 "zoned": false, 00:37:03.873 "supported_io_types": { 00:37:03.873 "read": true, 00:37:03.873 "write": true, 00:37:03.873 "unmap": true, 00:37:03.873 "flush": false, 00:37:03.873 "reset": true, 00:37:03.873 "nvme_admin": false, 00:37:03.873 "nvme_io": false, 00:37:03.873 "nvme_io_md": false, 00:37:03.873 "write_zeroes": true, 00:37:03.873 "zcopy": false, 00:37:03.873 "get_zone_info": false, 00:37:03.873 "zone_management": false, 00:37:03.873 "zone_append": false, 00:37:03.873 "compare": false, 00:37:03.873 "compare_and_write": false, 00:37:03.873 "abort": false, 00:37:03.873 "seek_hole": true, 00:37:03.873 "seek_data": true, 00:37:03.873 "copy": false, 00:37:03.873 "nvme_iov_md": false 00:37:03.873 }, 00:37:03.873 "driver_specific": { 00:37:03.873 "lvol": { 00:37:03.873 "lvol_store_uuid": "797a22e6-4fa1-488c-9a3d-f30535e7f51c", 00:37:03.873 "base_bdev": "nvme0n1", 00:37:03.873 "thin_provision": true, 00:37:03.873 "num_allocated_clusters": 0, 00:37:03.873 "snapshot": false, 00:37:03.873 "clone": false, 00:37:03.873 "esnap_clone": false 00:37:03.873 } 00:37:03.873 } 00:37:03.873 } 00:37:03.873 ]' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 
--l2p_dram_limit 10' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:37:03.873 04:59:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8a910e5d-eb2e-45ca-8b1d-e380d9be6526 --l2p_dram_limit 10 -c nvc0n1p0 00:37:04.135 [2024-11-27 04:59:11.148719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.148756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:04.135 [2024-11-27 04:59:11.148769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:04.135 [2024-11-27 04:59:11.148775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.148822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.148830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:04.135 [2024-11-27 04:59:11.148838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:37:04.135 [2024-11-27 04:59:11.148843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.148860] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:04.135 [2024-11-27 04:59:11.149457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:04.135 [2024-11-27 04:59:11.149474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.149480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:04.135 [2024-11-27 04:59:11.149488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:37:04.135 [2024-11-27 04:59:11.149493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.149522] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fbf2820b-ef9f-4f2f-b29c-d3b26af8645f 00:37:04.135 [2024-11-27 04:59:11.150483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.150502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:04.135 [2024-11-27 04:59:11.150510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:37:04.135 [2024-11-27 04:59:11.150520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.155387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.155415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:04.135 [2024-11-27 04:59:11.155423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:37:04.135 [2024-11-27 04:59:11.155431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.155497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.155506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:04.135 [2024-11-27 04:59:11.155512] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:37:04.135 [2024-11-27 04:59:11.155521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.155550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.155558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:04.135 [2024-11-27 04:59:11.155565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:04.135 [2024-11-27 04:59:11.155572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.155589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:04.135 [2024-11-27 04:59:11.158504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.158527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:04.135 [2024-11-27 04:59:11.158537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:37:04.135 [2024-11-27 04:59:11.158543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.158569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.158576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:04.135 [2024-11-27 04:59:11.158583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:04.135 [2024-11-27 04:59:11.158589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.158614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:04.135 [2024-11-27 04:59:11.158719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:04.135 [2024-11-27 04:59:11.158731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:04.135 [2024-11-27 04:59:11.158740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:04.135 [2024-11-27 04:59:11.158748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:04.135 [2024-11-27 04:59:11.158755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:04.135 [2024-11-27 04:59:11.158762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:04.135 [2024-11-27 04:59:11.158769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:04.135 [2024-11-27 04:59:11.158776] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:04.135 [2024-11-27 04:59:11.158782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:04.135 [2024-11-27 04:59:11.158789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.158800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:04.135 [2024-11-27 04:59:11.158807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:37:04.135 [2024-11-27 04:59:11.158813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.158879] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.135 [2024-11-27 04:59:11.158886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:04.135 [2024-11-27 04:59:11.158893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:37:04.135 [2024-11-27 04:59:11.158898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.135 [2024-11-27 04:59:11.158977] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:04.135 [2024-11-27 04:59:11.158984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:04.135 [2024-11-27 04:59:11.158992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:04.135 [2024-11-27 04:59:11.158998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.135 [2024-11-27 04:59:11.159005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:04.135 [2024-11-27 04:59:11.159010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:04.135 [2024-11-27 04:59:11.159017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:04.135 [2024-11-27 04:59:11.159022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:04.135 [2024-11-27 04:59:11.159028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:04.135 [2024-11-27 04:59:11.159034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:04.136 [2024-11-27 04:59:11.159040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:04.136 [2024-11-27 04:59:11.159046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:04.136 [2024-11-27 04:59:11.159052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:04.136 [2024-11-27 04:59:11.159057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:04.136 [2024-11-27 04:59:11.159080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:04.136 [2024-11-27 04:59:11.159086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:04.136 [2024-11-27 04:59:11.159100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:04.136 [2024-11-27 04:59:11.159120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:04.136 [2024-11-27 04:59:11.159137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:04.136 [2024-11-27 04:59:11.159155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159167] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:04.136 [2024-11-27 04:59:11.159172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:04.136 [2024-11-27 04:59:11.159192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:04.136 [2024-11-27 04:59:11.159203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:04.136 [2024-11-27 04:59:11.159208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:04.136 [2024-11-27 04:59:11.159214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:04.136 [2024-11-27 04:59:11.159219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:04.136 [2024-11-27 04:59:11.159225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:04.136 [2024-11-27 04:59:11.159230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:04.136 [2024-11-27 04:59:11.159242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:04.136 [2024-11-27 04:59:11.159248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159253] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:04.136 [2024-11-27 04:59:11.159260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:04.136 [2024-11-27 04:59:11.159265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.136 [2024-11-27 04:59:11.159280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:04.136 [2024-11-27 04:59:11.159288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:04.136 [2024-11-27 04:59:11.159293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:04.136 [2024-11-27 04:59:11.159300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:04.136 [2024-11-27 04:59:11.159304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:04.136 [2024-11-27 04:59:11.159311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:04.136 [2024-11-27 04:59:11.159319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:04.136 [2024-11-27 04:59:11.159329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:04.136 [2024-11-27 04:59:11.159343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:04.136 [2024-11-27 04:59:11.159348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:04.136 [2024-11-27 04:59:11.159355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:04.136 [2024-11-27 04:59:11.159360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:04.136 [2024-11-27 04:59:11.159367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:04.136 [2024-11-27 04:59:11.159372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:04.136 [2024-11-27 04:59:11.159378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:04.136 [2024-11-27 04:59:11.159384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:04.136 [2024-11-27 04:59:11.159392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:04.136 [2024-11-27 04:59:11.159423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:04.136 [2024-11-27 04:59:11.159430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:04.136 [2024-11-27 04:59:11.159443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:04.136 [2024-11-27 04:59:11.159448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:04.136 [2024-11-27 04:59:11.159456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:04.136 [2024-11-27 04:59:11.159462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.136 [2024-11-27 04:59:11.159468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:04.136 [2024-11-27 04:59:11.159474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:37:04.136 [2024-11-27 04:59:11.159482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.136 [2024-11-27 04:59:11.159522] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:37:04.136 [2024-11-27 04:59:11.159533] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:37:08.345 [2024-11-27 04:59:14.793313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.793379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:37:08.345 [2024-11-27 04:59:14.793396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3633.762 ms 00:37:08.345 [2024-11-27 04:59:14.793408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.826908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.826973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:08.345 [2024-11-27 04:59:14.826990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.244 ms 00:37:08.345 [2024-11-27 04:59:14.827002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.827182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.827197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:08.345 [2024-11-27 04:59:14.827206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:37:08.345 [2024-11-27 04:59:14.827223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.862719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.862767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:08.345 [2024-11-27 04:59:14.862779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.443 ms 00:37:08.345 [2024-11-27 04:59:14.862790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.862830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.862841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:08.345 [2024-11-27 04:59:14.862851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:08.345 [2024-11-27 04:59:14.862869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.863439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.863468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:08.345 [2024-11-27 04:59:14.863479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:37:08.345 [2024-11-27 04:59:14.863491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.863606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.863621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:08.345 [2024-11-27 04:59:14.863630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:37:08.345 [2024-11-27 04:59:14.863643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.880767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.880807] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:08.345 [2024-11-27 04:59:14.880817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 00:37:08.345 [2024-11-27 04:59:14.880827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:14.912649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:08.345 [2024-11-27 04:59:14.916497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.345 [2024-11-27 04:59:14.916536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:08.345 [2024-11-27 04:59:14.916551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.579 ms 00:37:08.345 [2024-11-27 04:59:14.916559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.345 [2024-11-27 04:59:15.010773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.010834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:37:08.346 [2024-11-27 04:59:15.010852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.161 ms 00:37:08.346 [2024-11-27 04:59:15.010861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.011092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.011106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:08.346 [2024-11-27 04:59:15.011121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:37:08.346 [2024-11-27 04:59:15.011129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.037289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.037352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:37:08.346 [2024-11-27 04:59:15.037370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.102 ms 00:37:08.346 [2024-11-27 04:59:15.037378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.062084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.062125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:37:08.346 [2024-11-27 04:59:15.062140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.649 ms 00:37:08.346 [2024-11-27 04:59:15.062147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.062769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.062787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:08.346 [2024-11-27 04:59:15.062802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:37:08.346 [2024-11-27 04:59:15.062810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.142644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.142695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:37:08.346 [2024-11-27 04:59:15.142716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.784 ms 00:37:08.346 [2024-11-27 04:59:15.142726] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.170461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.170510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:37:08.346 [2024-11-27 04:59:15.170527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.621 ms 00:37:08.346 [2024-11-27 04:59:15.170535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.196157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.196203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:37:08.346 [2024-11-27 04:59:15.196218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:37:08.346 [2024-11-27 04:59:15.196225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.222280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.222321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:08.346 [2024-11-27 04:59:15.222337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.999 ms 00:37:08.346 [2024-11-27 04:59:15.222346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.222402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.222413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:08.346 [2024-11-27 04:59:15.222429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:08.346 [2024-11-27 04:59:15.222437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.222531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.346 [2024-11-27 04:59:15.222544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:08.346 [2024-11-27 04:59:15.222555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:37:08.346 [2024-11-27 04:59:15.222563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.346 [2024-11-27 04:59:15.223849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4074.616 ms, result 0 00:37:08.346 { 00:37:08.346 "name": "ftl0", 00:37:08.346 "uuid": "fbf2820b-ef9f-4f2f-b29c-d3b26af8645f" 00:37:08.346 } 00:37:08.346 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:37:08.346 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:37:08.346 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:37:08.346 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:37:08.346 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:37:08.607 /dev/nbd0 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:37:08.607 1+0 records in 00:37:08.607 1+0 records out 00:37:08.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418405 s, 9.8 MB/s 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:37:08.607 04:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:37:08.867 [2024-11-27 04:59:15.816775] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:37:08.867 [2024-11-27 04:59:15.816924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80444 ] 00:37:08.867 [2024-11-27 04:59:15.984282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.128 [2024-11-27 04:59:16.138909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.504  [2024-11-27T04:59:18.641Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-27T04:59:19.574Z] Copying: 437/1024 [MB] (246 MBps) [2024-11-27T04:59:20.510Z] Copying: 689/1024 [MB] (252 MBps) [2024-11-27T04:59:21.076Z] Copying: 938/1024 [MB] (248 MBps) [2024-11-27T04:59:21.643Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:37:14.441 00:37:14.441 04:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:16.342 04:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:37:16.342 [2024-11-27 04:59:23.204445] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
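(Annotation: the two spdk_dd invocations above implement the test's write-then-verify flow — 262144 blocks x 4096 B = 1 GiB of random data is staged in testfile, checksummed with md5sum as the reference, then written through the FTL bdev via /dev/nbd0 with O_DIRECT. A condensed sketch, with flags copied from the log and only the path variables introduced for brevity:)

# Condensed sketch of the write path traced above (flags copied from the log;
# DD and TESTFILE are shorthand variables, not harness names).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
$DD -m 0x2 --if=/dev/urandom --of=$TESTFILE --bs=4096 --count=262144    # stage 1 GiB of random data
md5sum $TESTFILE                                                        # reference checksum
$DD -m 0x2 --if=$TESTFILE --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct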
00:37:16.342 [2024-11-27 04:59:23.204557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80528 ] 00:37:16.342 [2024-11-27 04:59:23.361411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.342 [2024-11-27 04:59:23.467142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.721  [2024-11-27T04:59:25.867Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T04:59:26.811Z] Copying: 34/1024 [MB] (18 MBps) [2024-11-27T04:59:27.754Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-27T04:59:29.141Z] Copying: 64/1024 [MB] (16 MBps) [2024-11-27T04:59:29.713Z] Copying: 77/1024 [MB] (13 MBps) [2024-11-27T04:59:31.130Z] Copying: 92/1024 [MB] (14 MBps) [2024-11-27T04:59:31.740Z] Copying: 105/1024 [MB] (13 MBps) [2024-11-27T04:59:33.117Z] Copying: 129/1024 [MB] (23 MBps) [2024-11-27T04:59:34.054Z] Copying: 159/1024 [MB] (30 MBps) [2024-11-27T04:59:34.991Z] Copying: 187/1024 [MB] (27 MBps) [2024-11-27T04:59:35.925Z] Copying: 208/1024 [MB] (20 MBps) [2024-11-27T04:59:36.859Z] Copying: 243/1024 [MB] (35 MBps) [2024-11-27T04:59:37.792Z] Copying: 279/1024 [MB] (35 MBps) [2024-11-27T04:59:38.723Z] Copying: 313/1024 [MB] (33 MBps) [2024-11-27T04:59:40.092Z] Copying: 348/1024 [MB] (34 MBps) [2024-11-27T04:59:41.029Z] Copying: 382/1024 [MB] (34 MBps) [2024-11-27T04:59:41.961Z] Copying: 403/1024 [MB] (20 MBps) [2024-11-27T04:59:42.892Z] Copying: 433/1024 [MB] (30 MBps) [2024-11-27T04:59:43.825Z] Copying: 468/1024 [MB] (34 MBps) [2024-11-27T04:59:44.757Z] Copying: 503/1024 [MB] (34 MBps) [2024-11-27T04:59:46.145Z] Copying: 538/1024 [MB] (35 MBps) [2024-11-27T04:59:46.784Z] Copying: 550/1024 [MB] (11 MBps) [2024-11-27T04:59:47.723Z] Copying: 573/1024 [MB] (23 MBps) [2024-11-27T04:59:49.108Z] Copying: 593/1024 [MB] (19 MBps) [2024-11-27T04:59:50.047Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-27T04:59:50.985Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-27T04:59:51.924Z] Copying: 640/1024 [MB] (19 MBps) [2024-11-27T04:59:52.859Z] Copying: 657/1024 [MB] (16 MBps) [2024-11-27T04:59:53.803Z] Copying: 681/1024 [MB] (23 MBps) [2024-11-27T04:59:54.744Z] Copying: 695/1024 [MB] (14 MBps) [2024-11-27T04:59:56.127Z] Copying: 711/1024 [MB] (15 MBps) [2024-11-27T04:59:57.069Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-27T04:59:58.009Z] Copying: 745/1024 [MB] (17 MBps) [2024-11-27T04:59:58.949Z] Copying: 762/1024 [MB] (17 MBps) [2024-11-27T04:59:59.880Z] Copying: 780/1024 [MB] (17 MBps) [2024-11-27T05:00:00.812Z] Copying: 812/1024 [MB] (32 MBps) [2024-11-27T05:00:01.761Z] Copying: 845/1024 [MB] (32 MBps) [2024-11-27T05:00:03.158Z] Copying: 878/1024 [MB] (33 MBps) [2024-11-27T05:00:03.749Z] Copying: 912/1024 [MB] (33 MBps) [2024-11-27T05:00:05.132Z] Copying: 928/1024 [MB] (16 MBps) [2024-11-27T05:00:06.076Z] Copying: 942/1024 [MB] (13 MBps) [2024-11-27T05:00:07.017Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-27T05:00:07.949Z] Copying: 972/1024 [MB] (17 MBps) [2024-11-27T05:00:08.514Z] Copying: 1004/1024 [MB] (32 MBps) [2024-11-27T05:00:09.108Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:38:01.905 00:38:01.905 05:00:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:38:01.905 05:00:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:38:02.165 05:00:09 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:02.165 [2024-11-27 05:00:09.324051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.324100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:02.165 [2024-11-27 05:00:09.324111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:02.165 [2024-11-27 05:00:09.324121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.165 [2024-11-27 05:00:09.324139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:02.165 [2024-11-27 05:00:09.326259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.326286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:02.165 [2024-11-27 05:00:09.326296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.105 ms 00:38:02.165 [2024-11-27 05:00:09.326302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.165 [2024-11-27 05:00:09.328116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.328144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:02.165 [2024-11-27 05:00:09.328153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:38:02.165 [2024-11-27 05:00:09.328159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.165 [2024-11-27 05:00:09.342144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.342172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:02.165 [2024-11-27 05:00:09.342183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.967 ms 00:38:02.165 [2024-11-27 05:00:09.342189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.165 [2024-11-27 05:00:09.346967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.346991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:02.165 [2024-11-27 05:00:09.347001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.751 ms 00:38:02.165 [2024-11-27 05:00:09.347008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.165 [2024-11-27 05:00:09.365152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.165 [2024-11-27 05:00:09.365180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:02.165 [2024-11-27 05:00:09.365190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.085 ms 00:38:02.165 [2024-11-27 05:00:09.365196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.377540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.377568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:02.424 [2024-11-27 05:00:09.377581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.310 ms 00:38:02.424 [2024-11-27 05:00:09.377588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.377693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.377701] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:02.424 [2024-11-27 05:00:09.377709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:38:02.424 [2024-11-27 05:00:09.377715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.395189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.395223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:02.424 [2024-11-27 05:00:09.395233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.459 ms 00:38:02.424 [2024-11-27 05:00:09.395239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.412054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.412087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:02.424 [2024-11-27 05:00:09.412096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.785 ms 00:38:02.424 [2024-11-27 05:00:09.412102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.429344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.429369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:02.424 [2024-11-27 05:00:09.429378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:38:02.424 [2024-11-27 05:00:09.429384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.446512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.424 [2024-11-27 05:00:09.446537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:02.424 [2024-11-27 05:00:09.446546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.072 ms 00:38:02.424 [2024-11-27 05:00:09.446552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.424 [2024-11-27 05:00:09.446580] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:02.424 [2024-11-27 05:00:09.446591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:02.424 [2024-11-27 05:00:09.446654] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10 … Band 100: 0 / 261120 wr_cnt: 0 state: free (91 identical per-band entries collapsed) 00:38:02.425 [2024-11-27 05:00:09.447275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:02.425 [2024-11-27 05:00:09.447283] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbf2820b-ef9f-4f2f-b29c-d3b26af8645f 00:38:02.425 [2024-11-27 05:00:09.447289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:02.425 [2024-11-27 05:00:09.447297] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:02.425 [2024-11-27 05:00:09.447305] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:02.425 [2024-11-27 05:00:09.447312] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:02.425 [2024-11-27 05:00:09.447317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:02.425 [2024-11-27 05:00:09.447324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:02.425 [2024-11-27 05:00:09.447331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:02.425 [2024-11-27 05:00:09.447337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:02.425 [2024-11-27 05:00:09.447342] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:38:02.425 [2024-11-27 05:00:09.447348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.425 [2024-11-27 05:00:09.447354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:02.425 [2024-11-27 05:00:09.447362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:38:02.425 [2024-11-27 05:00:09.447367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.425 [2024-11-27 05:00:09.457083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.425 [2024-11-27 05:00:09.457108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:02.426 [2024-11-27 05:00:09.457117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.691 ms 00:38:02.426 [2024-11-27 05:00:09.457123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.457402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:02.426 [2024-11-27 05:00:09.457415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:02.426 [2024-11-27 05:00:09.457423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:38:02.426 [2024-11-27 05:00:09.457430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.490525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.490552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:02.426 [2024-11-27 05:00:09.490562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.490568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.490609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.490615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:02.426 [2024-11-27 05:00:09.490623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.490629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.490682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.490691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:02.426 [2024-11-27 05:00:09.490699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.490705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.490721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.490727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:02.426 [2024-11-27 05:00:09.490734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.490740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.550609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.550642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:02.426 [2024-11-27 05:00:09.550651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 
[2024-11-27 05:00:09.550657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.599707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.599739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:02.426 [2024-11-27 05:00:09.599749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.599755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.599821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.599829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:02.426 [2024-11-27 05:00:09.599838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.599844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.599880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.599887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:02.426 [2024-11-27 05:00:09.599895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.599901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.599970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.599979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:02.426 [2024-11-27 05:00:09.599986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.599993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.600019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.600025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:02.426 [2024-11-27 05:00:09.600033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.600039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.600085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.600092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:02.426 [2024-11-27 05:00:09.600100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.600107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.600143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:02.426 [2024-11-27 05:00:09.600151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:02.426 [2024-11-27 05:00:09.600159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:02.426 [2024-11-27 05:00:09.600164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:02.426 [2024-11-27 05:00:09.600267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.188 ms, result 0 00:38:02.426 true 00:38:02.426 05:00:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80302 
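The 'FTL shutdown' trace above is immediately followed by dirty_shutdown.sh line 83 SIGKILLing the spdk_tgt process (pid 80302): the test deliberately denies the target any further graceful teardown, then drives the next round of I/O through a standalone spdk_dd that loads the FTL bdev from the saved JSON config and therefore has to come up from whatever state is on disk. A minimal sketch of that pattern — paths and the RPC setup comments are illustrative, not the literal script, which lives in test/ftl/dirty_shutdown.sh:

    # start the target and set up an FTL bdev on it (setup details elided)
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &
    svcpid=$!
    # ... rpc.py calls creating the base/cache bdevs and the ftl0 bdev ...
    # simulate power loss: SIGKILL leaves no chance for a clean FTL shutdown
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
    # re-drive writes from a separate process; FTL must start from on-disk state
    "$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 --count=262144 \
        --seek=262144 --json=ftl.json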
00:38:02.426 05:00:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80302 00:38:02.686 05:00:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:38:02.686 [2024-11-27 05:00:09.688641] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:38:02.686 [2024-11-27 05:00:09.688755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81012 ] 00:38:02.686 [2024-11-27 05:00:09.848581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.946 [2024-11-27 05:00:09.945388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.331  [2024-11-27T05:00:12.467Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-27T05:00:13.401Z] Copying: 441/1024 [MB] (251 MBps) [2024-11-27T05:00:14.334Z] Copying: 697/1024 [MB] (256 MBps) [2024-11-27T05:00:14.592Z] Copying: 952/1024 [MB] (255 MBps) [2024-11-27T05:00:15.160Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:38:07.957 00:38:07.957 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80302 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:38:07.957 05:00:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:07.957 [2024-11-27 05:00:15.094550] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:38:07.957 [2024-11-27 05:00:15.094638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81069 ] 00:38:08.216 [2024-11-27 05:00:15.245550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.216 [2024-11-27 05:00:15.320854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.473 [2024-11-27 05:00:15.531075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:08.473 [2024-11-27 05:00:15.531126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:08.473 [2024-11-27 05:00:15.593709] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:08.473 [2024-11-27 05:00:15.593996] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:08.473 [2024-11-27 05:00:15.594334] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:08.733 [2024-11-27 05:00:15.804814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.804849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:08.733 [2024-11-27 05:00:15.804859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:08.733 [2024-11-27 05:00:15.804867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.804900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.804908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:08.733 [2024-11-27 05:00:15.804914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:38:08.733 [2024-11-27 05:00:15.804920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.804932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:08.733 [2024-11-27 05:00:15.805465] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:08.733 [2024-11-27 05:00:15.805482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.805489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:08.733 [2024-11-27 05:00:15.805495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:38:08.733 [2024-11-27 05:00:15.805501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.806412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:08.733 [2024-11-27 05:00:15.816369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.816398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:08.733 [2024-11-27 05:00:15.816406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.959 ms 00:38:08.733 [2024-11-27 05:00:15.816412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.816452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.816460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:38:08.733 [2024-11-27 05:00:15.816467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:08.733 [2024-11-27 05:00:15.816472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.820797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.820821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:08.733 [2024-11-27 05:00:15.820828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:38:08.733 [2024-11-27 05:00:15.820834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.820886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.820893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:08.733 [2024-11-27 05:00:15.820899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:38:08.733 [2024-11-27 05:00:15.820905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.820937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.820944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:08.733 [2024-11-27 05:00:15.820950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:08.733 [2024-11-27 05:00:15.820956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.820969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:08.733 [2024-11-27 05:00:15.823571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.823595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:08.733 [2024-11-27 05:00:15.823603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:38:08.733 [2024-11-27 05:00:15.823608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.823634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.823640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:08.733 [2024-11-27 05:00:15.823646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:08.733 [2024-11-27 05:00:15.823652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.733 [2024-11-27 05:00:15.823667] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:08.733 [2024-11-27 05:00:15.823681] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:08.733 [2024-11-27 05:00:15.823707] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:08.733 [2024-11-27 05:00:15.823719] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:08.733 [2024-11-27 05:00:15.823797] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:08.733 [2024-11-27 05:00:15.823806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:08.733 
[2024-11-27 05:00:15.823814] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:08.733 [2024-11-27 05:00:15.823823] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:08.733 [2024-11-27 05:00:15.823830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:08.733 [2024-11-27 05:00:15.823836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:08.733 [2024-11-27 05:00:15.823841] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:08.733 [2024-11-27 05:00:15.823847] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:08.733 [2024-11-27 05:00:15.823852] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:08.733 [2024-11-27 05:00:15.823858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.733 [2024-11-27 05:00:15.823863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:08.734 [2024-11-27 05:00:15.823869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:38:08.734 [2024-11-27 05:00:15.823874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.823936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.734 [2024-11-27 05:00:15.823945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:08.734 [2024-11-27 05:00:15.823950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:38:08.734 [2024-11-27 05:00:15.823956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.824031] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:08.734 [2024-11-27 05:00:15.824044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:08.734 [2024-11-27 05:00:15.824051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:08.734 [2024-11-27 05:00:15.824078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:08.734 [2024-11-27 05:00:15.824095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:08.734 [2024-11-27 05:00:15.824109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:08.734 [2024-11-27 05:00:15.824114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:08.734 [2024-11-27 05:00:15.824120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:08.734 [2024-11-27 05:00:15.824125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:08.734 [2024-11-27 05:00:15.824131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:08.734 [2024-11-27 05:00:15.824137] ftl_layout.c: 133:dump_region: 
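The layout numbers in this dump are internally consistent: the L2P table holds one address per logical block, so 20971520 entries × 4 bytes = 83,886,080 bytes — exactly the 80.00 MiB reported for the l2p region above. At a 4 KiB block size those 20971520 logical blocks give 80 GiB (81920 MiB) of user-addressable space out of the 102400 MiB data_btm region shown a few lines below; the gap (about 20% here) is the share the FTL holds back as overprovisioning headroom.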
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:08.734 [2024-11-27 05:00:15.824148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:08.734 [2024-11-27 05:00:15.824162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:08.734 [2024-11-27 05:00:15.824178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:08.734 [2024-11-27 05:00:15.824192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:08.734 [2024-11-27 05:00:15.824207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:08.734 [2024-11-27 05:00:15.824222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:08.734 [2024-11-27 05:00:15.824232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:08.734 [2024-11-27 05:00:15.824237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:08.734 [2024-11-27 05:00:15.824242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:08.734 [2024-11-27 05:00:15.824247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:08.734 [2024-11-27 05:00:15.824252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:08.734 [2024-11-27 05:00:15.824256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:08.734 [2024-11-27 05:00:15.824266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:08.734 [2024-11-27 05:00:15.824271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 05:00:15.824276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:08.734 [2024-11-27 05:00:15.824282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:08.734 [2024-11-27 05:00:15.824289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:08.734 [2024-11-27 
05:00:15.824301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:08.734 [2024-11-27 05:00:15.824306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:08.734 [2024-11-27 05:00:15.824311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:08.734 [2024-11-27 05:00:15.824316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:08.734 [2024-11-27 05:00:15.824321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:08.734 [2024-11-27 05:00:15.824326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:08.734 [2024-11-27 05:00:15.824332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:08.734 [2024-11-27 05:00:15.824339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:08.734 [2024-11-27 05:00:15.824350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:08.734 [2024-11-27 05:00:15.824355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:08.734 [2024-11-27 05:00:15.824361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:08.734 [2024-11-27 05:00:15.824366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:08.734 [2024-11-27 05:00:15.824372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:08.734 [2024-11-27 05:00:15.824377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:08.734 [2024-11-27 05:00:15.824382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:08.734 [2024-11-27 05:00:15.824387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:08.734 [2024-11-27 05:00:15.824392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:08.734 [2024-11-27 05:00:15.824419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:38:08.734 [2024-11-27 05:00:15.824425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:08.734 [2024-11-27 05:00:15.824436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:08.734 [2024-11-27 05:00:15.824441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:08.734 [2024-11-27 05:00:15.824447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:08.734 [2024-11-27 05:00:15.824452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.734 [2024-11-27 05:00:15.824458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:08.734 [2024-11-27 05:00:15.824464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:38:08.734 [2024-11-27 05:00:15.824469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.845034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.734 [2024-11-27 05:00:15.845062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:08.734 [2024-11-27 05:00:15.845084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.531 ms 00:38:08.734 [2024-11-27 05:00:15.845090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.845160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.734 [2024-11-27 05:00:15.845167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:08.734 [2024-11-27 05:00:15.845173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:38:08.734 [2024-11-27 05:00:15.845180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.882701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.734 [2024-11-27 05:00:15.882736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:08.734 [2024-11-27 05:00:15.882749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.480 ms 00:38:08.734 [2024-11-27 05:00:15.882755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.734 [2024-11-27 05:00:15.882791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.882799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:08.735 [2024-11-27 05:00:15.882805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:38:08.735 [2024-11-27 05:00:15.882811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.883147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.883161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:08.735 [2024-11-27 05:00:15.883169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:38:08.735 [2024-11-27 05:00:15.883180] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.883275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.883290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:08.735 [2024-11-27 05:00:15.883296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:38:08.735 [2024-11-27 05:00:15.883302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.893634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.893659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:08.735 [2024-11-27 05:00:15.893667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.316 ms 00:38:08.735 [2024-11-27 05:00:15.893673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.903552] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:08.735 [2024-11-27 05:00:15.903577] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:08.735 [2024-11-27 05:00:15.903587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.903593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:08.735 [2024-11-27 05:00:15.903600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.842 ms 00:38:08.735 [2024-11-27 05:00:15.903606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.922492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.922520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:08.735 [2024-11-27 05:00:15.922529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.855 ms 00:38:08.735 [2024-11-27 05:00:15.922536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.735 [2024-11-27 05:00:15.931571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.735 [2024-11-27 05:00:15.931596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:08.735 [2024-11-27 05:00:15.931604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.999 ms 00:38:08.735 [2024-11-27 05:00:15.931609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.941916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.941951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:08.993 [2024-11-27 05:00:15.941964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:38:08.993 [2024-11-27 05:00:15.941974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.942468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.942489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:08.993 [2024-11-27 05:00:15.942496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:38:08.993 [2024-11-27 05:00:15.942502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 
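This bring-up interleaves Restore steps (NV cache metadata, valid map, band info, trim, and just below, P2L checkpoints and L2P) because spdk_dd found existing on-disk state to replay after the kill -9. Each step is framed by the same four trace_step lines (Action / name / duration / status), and the per-step durations roughly add up to the total that finish_msg reports at the end of the sequence. To see where the time goes, a one-liner over a saved copy of this console output is enough (the filename is an assumption):

    # total up all per-step FTL durations recorded in the log
    grep -o 'duration: [0-9.]* ms' console.log | awk '{sum += $2} END {print sum, "ms"}'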
[2024-11-27 05:00:15.986001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.986044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:08.993 [2024-11-27 05:00:15.986055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.483 ms 00:38:08.993 [2024-11-27 05:00:15.986061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.994267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:08.993 [2024-11-27 05:00:15.996361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.996380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:08.993 [2024-11-27 05:00:15.996390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.248 ms 00:38:08.993 [2024-11-27 05:00:15.996400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.996467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.996476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:08.993 [2024-11-27 05:00:15.996484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:08.993 [2024-11-27 05:00:15.996490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.996541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.996549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:08.993 [2024-11-27 05:00:15.996556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:38:08.993 [2024-11-27 05:00:15.996562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.996580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.996587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:08.993 [2024-11-27 05:00:15.996593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:08.993 [2024-11-27 05:00:15.996599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:15.996623] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:08.993 [2024-11-27 05:00:15.996630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:15.996637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:08.993 [2024-11-27 05:00:15.996643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:08.993 [2024-11-27 05:00:15.996651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:16.014411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:16.014442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:08.993 [2024-11-27 05:00:16.014451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:38:08.993 [2024-11-27 05:00:16.014457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:16.014517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:08.993 [2024-11-27 05:00:16.014525] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:08.993 [2024-11-27 05:00:16.014531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:38:08.993 [2024-11-27 05:00:16.014537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:08.993 [2024-11-27 05:00:16.015287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 210.125 ms, result 0 00:38:09.925  [2024-11-27T05:00:18.062Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-27T05:00:19.434Z] Copying: 75/1024 [MB] (28 MBps) [2024-11-27T05:00:20.371Z] Copying: 101/1024 [MB] (26 MBps) [2024-11-27T05:00:21.311Z] Copying: 144/1024 [MB] (42 MBps) [2024-11-27T05:00:22.252Z] Copying: 155/1024 [MB] (11 MBps) [2024-11-27T05:00:23.189Z] Copying: 167/1024 [MB] (12 MBps) [2024-11-27T05:00:24.120Z] Copying: 190/1024 [MB] (22 MBps) [2024-11-27T05:00:25.055Z] Copying: 224/1024 [MB] (34 MBps) [2024-11-27T05:00:26.436Z] Copying: 245/1024 [MB] (20 MBps) [2024-11-27T05:00:27.400Z] Copying: 259/1024 [MB] (14 MBps) [2024-11-27T05:00:28.347Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-27T05:00:29.280Z] Copying: 285/1024 [MB] (13 MBps) [2024-11-27T05:00:30.210Z] Copying: 311/1024 [MB] (25 MBps) [2024-11-27T05:00:31.140Z] Copying: 340/1024 [MB] (29 MBps) [2024-11-27T05:00:32.069Z] Copying: 380/1024 [MB] (40 MBps) [2024-11-27T05:00:33.492Z] Copying: 425/1024 [MB] (44 MBps) [2024-11-27T05:00:34.125Z] Copying: 462/1024 [MB] (36 MBps) [2024-11-27T05:00:35.068Z] Copying: 478/1024 [MB] (16 MBps) [2024-11-27T05:00:36.454Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-27T05:00:37.393Z] Copying: 504/1024 [MB] (15 MBps) [2024-11-27T05:00:38.329Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-27T05:00:39.260Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-27T05:00:40.200Z] Copying: 572/1024 [MB] (43 MBps) [2024-11-27T05:00:41.145Z] Copying: 600/1024 [MB] (27 MBps) [2024-11-27T05:00:42.088Z] Copying: 619/1024 [MB] (18 MBps) [2024-11-27T05:00:43.031Z] Copying: 639/1024 [MB] (20 MBps) [2024-11-27T05:00:44.403Z] Copying: 663/1024 [MB] (23 MBps) [2024-11-27T05:00:45.333Z] Copying: 707/1024 [MB] (44 MBps) [2024-11-27T05:00:46.268Z] Copying: 753/1024 [MB] (45 MBps) [2024-11-27T05:00:47.207Z] Copying: 796/1024 [MB] (43 MBps) [2024-11-27T05:00:48.140Z] Copying: 813/1024 [MB] (16 MBps) [2024-11-27T05:00:49.083Z] Copying: 844/1024 [MB] (31 MBps) [2024-11-27T05:00:50.470Z] Copying: 862/1024 [MB] (18 MBps) [2024-11-27T05:00:51.044Z] Copying: 879/1024 [MB] (17 MBps) [2024-11-27T05:00:52.424Z] Copying: 899/1024 [MB] (19 MBps) [2024-11-27T05:00:53.366Z] Copying: 921/1024 [MB] (21 MBps) [2024-11-27T05:00:54.309Z] Copying: 937/1024 [MB] (16 MBps) [2024-11-27T05:00:55.248Z] Copying: 953/1024 [MB] (15 MBps) [2024-11-27T05:00:56.182Z] Copying: 974/1024 [MB] (21 MBps) [2024-11-27T05:00:57.122Z] Copying: 1018/1024 [MB] (44 MBps) [2024-11-27T05:00:57.122Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-27 05:00:56.957703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:56.958101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:49.919 [2024-11-27 05:00:56.958190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:49.919 [2024-11-27 05:00:56.958217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:56.961120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
app_thread 00:38:49.919 [2024-11-27 05:00:56.964868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:56.964984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:49.919 [2024-11-27 05:00:56.965041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:38:49.919 [2024-11-27 05:00:56.965090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:56.977901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:56.978039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:49.919 [2024-11-27 05:00:56.978116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.770 ms 00:38:49.919 [2024-11-27 05:00:56.978142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:56.999751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:56.999870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:49.919 [2024-11-27 05:00:56.999926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.578 ms 00:38:49.919 [2024-11-27 05:00:56.999949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:57.006154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:57.006255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:49.919 [2024-11-27 05:00:57.006303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:38:49.919 [2024-11-27 05:00:57.006325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:57.031007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:57.031133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:49.919 [2024-11-27 05:00:57.031188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.619 ms 00:38:49.919 [2024-11-27 05:00:57.031210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.919 [2024-11-27 05:00:57.045967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.919 [2024-11-27 05:00:57.046124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:49.919 [2024-11-27 05:00:57.046185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:38:49.919 [2024-11-27 05:00:57.046209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.180 [2024-11-27 05:00:57.360844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.180 [2024-11-27 05:00:57.361005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:50.180 [2024-11-27 05:00:57.361097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 314.584 ms 00:38:50.180 [2024-11-27 05:00:57.361125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.443 [2024-11-27 05:00:57.386870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.443 [2024-11-27 05:00:57.387035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:50.444 [2024-11-27 05:00:57.387124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.710 ms 00:38:50.444 [2024-11-27 
05:00:57.387163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.444 [2024-11-27 05:00:57.412455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.444 [2024-11-27 05:00:57.412590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:50.444 [2024-11-27 05:00:57.412651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.244 ms 00:38:50.444 [2024-11-27 05:00:57.412674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.444 [2024-11-27 05:00:57.437454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.444 [2024-11-27 05:00:57.437613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:50.444 [2024-11-27 05:00:57.437685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.734 ms 00:38:50.444 [2024-11-27 05:00:57.437708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.444 [2024-11-27 05:00:57.462454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.444 [2024-11-27 05:00:57.462625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:50.444 [2024-11-27 05:00:57.462696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.592 ms 00:38:50.444 [2024-11-27 05:00:57.462719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.444 [2024-11-27 05:00:57.462763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:50.444 [2024-11-27 05:00:57.462791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113408 / 261120 wr_cnt: 1 state: open 00:38:50.444 [2024-11-27 05:00:57.462823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.462853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.462882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.462951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.462983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:38:50.444 [2024-11-27 05:00:57.463362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.463993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:50.444 [2024-11-27 05:00:57.464182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464320] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:50.445 [2024-11-27 05:00:57.464424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:50.445 [2024-11-27 05:00:57.464433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbf2820b-ef9f-4f2f-b29c-d3b26af8645f 00:38:50.445 [2024-11-27 05:00:57.464453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113408 00:38:50.445 [2024-11-27 05:00:57.464462] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114368 00:38:50.445 [2024-11-27 05:00:57.464470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113408 00:38:50.445 [2024-11-27 05:00:57.464479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085 00:38:50.445 [2024-11-27 05:00:57.464488] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:50.445 [2024-11-27 05:00:57.464497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:50.445 [2024-11-27 05:00:57.464505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:50.445 [2024-11-27 05:00:57.464512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:50.445 [2024-11-27 05:00:57.464518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:50.445 [2024-11-27 05:00:57.464526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.445 [2024-11-27 05:00:57.464535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:50.445 [2024-11-27 05:00:57.464543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:38:50.445 [2024-11-27 05:00:57.464551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.478315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.445 [2024-11-27 05:00:57.478453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:38:50.445 [2024-11-27 05:00:57.478508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.741 ms 00:38:50.445 [2024-11-27 05:00:57.478531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.478943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:50.445 [2024-11-27 05:00:57.478978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:50.445 [2024-11-27 05:00:57.479051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:38:50.445 [2024-11-27 05:00:57.479105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.515316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.445 [2024-11-27 05:00:57.515475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:50.445 [2024-11-27 05:00:57.515533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.445 [2024-11-27 05:00:57.515556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.515629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.445 [2024-11-27 05:00:57.515652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:50.445 [2024-11-27 05:00:57.515680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.445 [2024-11-27 05:00:57.515700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.515779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.445 [2024-11-27 05:00:57.515864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:50.445 [2024-11-27 05:00:57.515884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.445 [2024-11-27 05:00:57.515904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.515932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.445 [2024-11-27 05:00:57.515953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:50.445 [2024-11-27 05:00:57.515972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.445 [2024-11-27 05:00:57.516033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.445 [2024-11-27 05:00:57.600651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.445 [2024-11-27 05:00:57.600865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:50.445 [2024-11-27 05:00:57.600928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.445 [2024-11-27 05:00:57.600952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.669242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.669455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:50.706 [2024-11-27 05:00:57.669515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.669547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.669645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 
[2024-11-27 05:00:57.669670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:50.706 [2024-11-27 05:00:57.669691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.669711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.669759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.669781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:50.706 [2024-11-27 05:00:57.669803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.669870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.669985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.669997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:50.706 [2024-11-27 05:00:57.670007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.670015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.670048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.670058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:50.706 [2024-11-27 05:00:57.670091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.670101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.670145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.670156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:50.706 [2024-11-27 05:00:57.670164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.670172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.706 [2024-11-27 05:00:57.670220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:50.706 [2024-11-27 05:00:57.670231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:50.706 [2024-11-27 05:00:57.670240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:50.706 [2024-11-27 05:00:57.670248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:50.707 [2024-11-27 05:00:57.670382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 715.064 ms, result 0 00:38:52.090 00:38:52.090 00:38:52.090 05:00:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:38:53.466 05:01:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:53.726 [2024-11-27 05:01:00.699946] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
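A quick sanity check on the statistics block dumped during the shutdown above: write amplification is total writes over user writes, i.e. WAF = 114368 / 113408 ≈ 1.0085, which matches the logged value. In other words, during this phase the FTL issued only about 0.85% of extra writes (metadata, padding) beyond the user data itself.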
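The spdk_dd process starting here reads the test region back out of the ftl0 bdev into /home/vagrant/spdk_repo/spdk/test/ftl/testfile. Assuming the FTL's 4 KiB logical block size, --count=262144 works out to 262144 × 4 KiB = 1 GiB, which is exactly the 1024/1024 [MB] total that the copy progress below converges to; presumably the readback is then md5summed and compared against the checksums taken earlier in the test, verifying that data written before the dirty shutdown survived recovery.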
00:38:53.726 [2024-11-27 05:01:00.700094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81538 ] 00:38:53.726 [2024-11-27 05:01:00.860954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.041 [2024-11-27 05:01:00.983005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.342 [2024-11-27 05:01:01.281164] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:54.342 [2024-11-27 05:01:01.281259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:54.342 [2024-11-27 05:01:01.442969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.342 [2024-11-27 05:01:01.443039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:54.342 [2024-11-27 05:01:01.443055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:54.342 [2024-11-27 05:01:01.443078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.342 [2024-11-27 05:01:01.443134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.342 [2024-11-27 05:01:01.443148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:54.342 [2024-11-27 05:01:01.443157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:38:54.342 [2024-11-27 05:01:01.443165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.342 [2024-11-27 05:01:01.443187] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:54.342 [2024-11-27 05:01:01.443923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:54.342 [2024-11-27 05:01:01.443941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.443950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:54.343 [2024-11-27 05:01:01.443960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:38:54.343 [2024-11-27 05:01:01.443968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.446095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:54.343 [2024-11-27 05:01:01.460465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.460517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:54.343 [2024-11-27 05:01:01.460534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.373 ms 00:38:54.343 [2024-11-27 05:01:01.460543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.460627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.460638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:54.343 [2024-11-27 05:01:01.460647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:38:54.343 [2024-11-27 05:01:01.460655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.469227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:54.343 [2024-11-27 05:01:01.469268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:54.343 [2024-11-27 05:01:01.469279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.494 ms 00:38:54.343 [2024-11-27 05:01:01.469293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.469389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.469399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:54.343 [2024-11-27 05:01:01.469408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:38:54.343 [2024-11-27 05:01:01.469416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.469460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.469470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:54.343 [2024-11-27 05:01:01.469479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:54.343 [2024-11-27 05:01:01.469487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.469514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:54.343 [2024-11-27 05:01:01.473387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.473430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:54.343 [2024-11-27 05:01:01.473444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:38:54.343 [2024-11-27 05:01:01.473452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.473487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.473496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:54.343 [2024-11-27 05:01:01.473505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:54.343 [2024-11-27 05:01:01.473513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.473563] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:54.343 [2024-11-27 05:01:01.473587] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:54.343 [2024-11-27 05:01:01.473626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:54.343 [2024-11-27 05:01:01.473645] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:54.343 [2024-11-27 05:01:01.473751] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:54.343 [2024-11-27 05:01:01.473763] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:54.343 [2024-11-27 05:01:01.473775] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:54.343 [2024-11-27 05:01:01.473785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:54.343 [2024-11-27 05:01:01.473795] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:54.343 [2024-11-27 05:01:01.473804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:54.343 [2024-11-27 05:01:01.473811] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:54.343 [2024-11-27 05:01:01.473822] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:54.343 [2024-11-27 05:01:01.473830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:54.343 [2024-11-27 05:01:01.473838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.473847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:54.343 [2024-11-27 05:01:01.473855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:38:54.343 [2024-11-27 05:01:01.473863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.473946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.343 [2024-11-27 05:01:01.473954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:54.343 [2024-11-27 05:01:01.473961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:38:54.343 [2024-11-27 05:01:01.473968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.343 [2024-11-27 05:01:01.474090] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:54.343 [2024-11-27 05:01:01.474103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:54.343 [2024-11-27 05:01:01.474112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:54.343 [2024-11-27 05:01:01.474134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:54.343 [2024-11-27 05:01:01.474156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:54.343 [2024-11-27 05:01:01.474169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:54.343 [2024-11-27 05:01:01.474176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:54.343 [2024-11-27 05:01:01.474182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:54.343 [2024-11-27 05:01:01.474199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:54.343 [2024-11-27 05:01:01.474207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:54.343 [2024-11-27 05:01:01.474214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:54.343 [2024-11-27 05:01:01.474229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474235] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:54.343 [2024-11-27 05:01:01.474249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:54.343 [2024-11-27 05:01:01.474270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:54.343 [2024-11-27 05:01:01.474290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:54.343 [2024-11-27 05:01:01.474310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:54.343 [2024-11-27 05:01:01.474330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:54.343 [2024-11-27 05:01:01.474343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:54.343 [2024-11-27 05:01:01.474350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:54.343 [2024-11-27 05:01:01.474357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:54.343 [2024-11-27 05:01:01.474363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:54.343 [2024-11-27 05:01:01.474370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:54.343 [2024-11-27 05:01:01.474377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:54.343 [2024-11-27 05:01:01.474390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:54.343 [2024-11-27 05:01:01.474396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474403] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:54.343 [2024-11-27 05:01:01.474410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:54.343 [2024-11-27 05:01:01.474419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.343 [2024-11-27 05:01:01.474435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:54.343 [2024-11-27 05:01:01.474442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:54.343 [2024-11-27 05:01:01.474449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:54.343 
[2024-11-27 05:01:01.474456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:54.343 [2024-11-27 05:01:01.474462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:54.343 [2024-11-27 05:01:01.474470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:54.343 [2024-11-27 05:01:01.474479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:54.343 [2024-11-27 05:01:01.474488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:54.343 [2024-11-27 05:01:01.474507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:54.343 [2024-11-27 05:01:01.474515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:54.343 [2024-11-27 05:01:01.474522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:54.343 [2024-11-27 05:01:01.474529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:54.343 [2024-11-27 05:01:01.474536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:54.343 [2024-11-27 05:01:01.474543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:54.343 [2024-11-27 05:01:01.474550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:54.343 [2024-11-27 05:01:01.474557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:54.343 [2024-11-27 05:01:01.474564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:54.344 [2024-11-27 05:01:01.474571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:54.344 [2024-11-27 05:01:01.474608] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:54.344 [2024-11-27 05:01:01.474616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:54.344 [2024-11-27 05:01:01.474632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:54.344 [2024-11-27 05:01:01.474639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:54.344 [2024-11-27 05:01:01.474646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:54.344 [2024-11-27 05:01:01.474652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.344 [2024-11-27 05:01:01.474660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:54.344 [2024-11-27 05:01:01.474670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:38:54.344 [2024-11-27 05:01:01.474679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.344 [2024-11-27 05:01:01.506413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.344 [2024-11-27 05:01:01.506465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:54.344 [2024-11-27 05:01:01.506476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.689 ms 00:38:54.344 [2024-11-27 05:01:01.506489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.344 [2024-11-27 05:01:01.506579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.344 [2024-11-27 05:01:01.506587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:54.344 [2024-11-27 05:01:01.506596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:38:54.344 [2024-11-27 05:01:01.506605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.555732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.555791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:54.605 [2024-11-27 05:01:01.555804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.068 ms 00:38:54.605 [2024-11-27 05:01:01.555814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.555862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.555873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:54.605 [2024-11-27 05:01:01.555886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:54.605 [2024-11-27 05:01:01.555895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.556512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.556552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:54.605 [2024-11-27 05:01:01.556564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:38:54.605 [2024-11-27 05:01:01.556572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.556794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.556803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:54.605 [2024-11-27 05:01:01.556817] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:38:54.605 [2024-11-27 05:01:01.556826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.572509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.572558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:54.605 [2024-11-27 05:01:01.572570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:38:54.605 [2024-11-27 05:01:01.572578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.586767] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:38:54.605 [2024-11-27 05:01:01.586818] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:54.605 [2024-11-27 05:01:01.586832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.586841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:54.605 [2024-11-27 05:01:01.586851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.144 ms 00:38:54.605 [2024-11-27 05:01:01.586859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.612634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.612685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:54.605 [2024-11-27 05:01:01.612698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.719 ms 00:38:54.605 [2024-11-27 05:01:01.612706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.625621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.625671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:54.605 [2024-11-27 05:01:01.625682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:38:54.605 [2024-11-27 05:01:01.625690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.638427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.638475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:54.605 [2024-11-27 05:01:01.638487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.691 ms 00:38:54.605 [2024-11-27 05:01:01.638495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.639157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.639187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:54.605 [2024-11-27 05:01:01.639202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:38:54.605 [2024-11-27 05:01:01.639210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.703495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.703561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:54.605 [2024-11-27 05:01:01.703582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.265 ms 00:38:54.605 [2024-11-27 05:01:01.703592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.714844] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:54.605 [2024-11-27 05:01:01.717762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.717805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:54.605 [2024-11-27 05:01:01.717818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.114 ms 00:38:54.605 [2024-11-27 05:01:01.717826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.717910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.717921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:54.605 [2024-11-27 05:01:01.717934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:38:54.605 [2024-11-27 05:01:01.717942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.719764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.719812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:54.605 [2024-11-27 05:01:01.719822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:38:54.605 [2024-11-27 05:01:01.719830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.719858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.719868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:54.605 [2024-11-27 05:01:01.719877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:54.605 [2024-11-27 05:01:01.719885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.719930] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:54.605 [2024-11-27 05:01:01.719941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.719950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:54.605 [2024-11-27 05:01:01.719958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:54.605 [2024-11-27 05:01:01.719967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.745292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.745352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:54.605 [2024-11-27 05:01:01.745372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.306 ms 00:38:54.605 [2024-11-27 05:01:01.745381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.605 [2024-11-27 05:01:01.745469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.605 [2024-11-27 05:01:01.745481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:54.605 [2024-11-27 05:01:01.745490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:38:54.605 [2024-11-27 05:01:01.745498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
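Two of the layout figures dumped during this startup can be cross-checked directly: the l2p region's 80.00 MiB is exactly 20971520 L2P entries × 4 B address size (83,886,080 B), and each 8.00 MiB p2l region is exactly the 2048 P2L checkpoint pages × 4 KiB.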
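The Action / name / duration / status quadruples that dominate this trace come from the FTL management pipeline (mngt/ftl_mngt.c): startup and shutdown each run as an ordered list of steps, every step is timed, and when a sequence is aborted the steps that already completed are unwound through their rollback handlers, which is why the dirty-shutdown trace earlier shows a run of Rollback entries in roughly the reverse order of the startup Actions. A minimal sketch of that pattern in C follows; all names here are hypothetical, this is not the actual ftl_mngt API:

    /* Illustrative sketch of a step pipeline with per-step timing and
     * reverse-order rollback, modeled on the Action/name/duration/status
     * quadruples above. Hypothetical names; not SPDK's real API. */
    #include <stdio.h>
    #include <time.h>

    typedef int (*step_fn)(void);

    struct mgmt_step {
        const char *name;
        step_fn action;    /* forward path */
        step_fn rollback;  /* invoked in reverse order when unwinding */
    };

    static int step_ok(void) { return 0; }

    static double elapsed_ms(const struct timespec *t0)
    {
        struct timespec t1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0->tv_sec) * 1e3 +
               (t1.tv_nsec - t0->tv_nsec) / 1e6;
    }

    static int run_steps(const struct mgmt_step *steps, int n)
    {
        int i, rc = 0;

        for (i = 0; i < n; i++) {
            struct timespec t0;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            rc = steps[i].action();
            printf("Action: %s, duration: %.3f ms, status: %d\n",
                   steps[i].name, elapsed_ms(&t0), rc);
            if (rc)
                break;  /* abort the sequence */
        }
        /* Unwind completed steps, newest first, mirroring the
         * Rollback entries seen in the trace. */
        for (i--; rc != 0 && i >= 0; i--) {
            if (steps[i].rollback) {
                steps[i].rollback();
                printf("Rollback: %s\n", steps[i].name);
            }
        }
        return rc;
    }

    int main(void)
    {
        const struct mgmt_step startup[] = {
            { "Open base bdev",    step_ok, step_ok },
            { "Load super block",  step_ok, step_ok },
            { "Initialize L2P",    step_ok, step_ok },
            { "Start core poller", step_ok, step_ok },
        };

        return run_steps(startup, 4);
    }

The real pipeline is asynchronous (each step completes via a callback rather than a return code), but the bookkeeping, per-step timing plus reverse-order unwinding, is the same idea.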
00:38:54.605 [2024-11-27 05:01:01.746802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.347 ms, result 0 00:38:55.993  [2024-11-27T05:01:04.142Z] Copying: 1084/1048576 [kB] (1084 kBps) [2024-11-27T05:01:05.087Z] Copying: 4632/1048576 [kB] (3548 kBps) [2024-11-27T05:01:06.031Z] Copying: 15/1024 [MB] (10 MBps) [2024-11-27T05:01:06.978Z] Copying: 36/1024 [MB] (21 MBps) [2024-11-27T05:01:08.366Z] Copying: 65/1024 [MB] (28 MBps) [2024-11-27T05:01:08.939Z] Copying: 90/1024 [MB] (24 MBps) [2024-11-27T05:01:10.328Z] Copying: 106/1024 [MB] (16 MBps) [2024-11-27T05:01:11.266Z] Copying: 122/1024 [MB] (15 MBps) [2024-11-27T05:01:12.202Z] Copying: 143/1024 [MB] (21 MBps) [2024-11-27T05:01:13.147Z] Copying: 176/1024 [MB] (32 MBps) [2024-11-27T05:01:14.093Z] Copying: 198/1024 [MB] (21 MBps) [2024-11-27T05:01:15.033Z] Copying: 214/1024 [MB] (15 MBps) [2024-11-27T05:01:15.975Z] Copying: 238/1024 [MB] (24 MBps) [2024-11-27T05:01:17.362Z] Copying: 258/1024 [MB] (19 MBps) [2024-11-27T05:01:18.306Z] Copying: 284/1024 [MB] (25 MBps) [2024-11-27T05:01:19.249Z] Copying: 312/1024 [MB] (28 MBps) [2024-11-27T05:01:20.184Z] Copying: 340/1024 [MB] (28 MBps) [2024-11-27T05:01:21.125Z] Copying: 386/1024 [MB] (45 MBps) [2024-11-27T05:01:22.069Z] Copying: 402/1024 [MB] (16 MBps) [2024-11-27T05:01:23.014Z] Copying: 424/1024 [MB] (21 MBps) [2024-11-27T05:01:23.959Z] Copying: 453/1024 [MB] (29 MBps) [2024-11-27T05:01:25.346Z] Copying: 484/1024 [MB] (30 MBps) [2024-11-27T05:01:26.287Z] Copying: 509/1024 [MB] (25 MBps) [2024-11-27T05:01:27.230Z] Copying: 525/1024 [MB] (15 MBps) [2024-11-27T05:01:28.176Z] Copying: 550/1024 [MB] (24 MBps) [2024-11-27T05:01:29.121Z] Copying: 567/1024 [MB] (17 MBps) [2024-11-27T05:01:30.080Z] Copying: 592/1024 [MB] (25 MBps) [2024-11-27T05:01:31.083Z] Copying: 618/1024 [MB] (25 MBps) [2024-11-27T05:01:32.025Z] Copying: 649/1024 [MB] (30 MBps) [2024-11-27T05:01:32.967Z] Copying: 665/1024 [MB] (15 MBps) [2024-11-27T05:01:34.347Z] Copying: 681/1024 [MB] (16 MBps) [2024-11-27T05:01:35.290Z] Copying: 699/1024 [MB] (18 MBps) [2024-11-27T05:01:36.231Z] Copying: 720/1024 [MB] (20 MBps) [2024-11-27T05:01:37.172Z] Copying: 735/1024 [MB] (15 MBps) [2024-11-27T05:01:38.114Z] Copying: 756/1024 [MB] (20 MBps) [2024-11-27T05:01:39.055Z] Copying: 784/1024 [MB] (28 MBps) [2024-11-27T05:01:40.007Z] Copying: 814/1024 [MB] (30 MBps) [2024-11-27T05:01:40.945Z] Copying: 839/1024 [MB] (24 MBps) [2024-11-27T05:01:42.327Z] Copying: 864/1024 [MB] (24 MBps) [2024-11-27T05:01:43.271Z] Copying: 891/1024 [MB] (27 MBps) [2024-11-27T05:01:44.213Z] Copying: 915/1024 [MB] (24 MBps) [2024-11-27T05:01:45.156Z] Copying: 953/1024 [MB] (37 MBps) [2024-11-27T05:01:46.100Z] Copying: 985/1024 [MB] (31 MBps) [2024-11-27T05:01:46.362Z] Copying: 1012/1024 [MB] (27 MBps) [2024-11-27T05:01:46.933Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-27 05:01:46.802195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.802316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:39.730 [2024-11-27 05:01:46.802342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:39.730 [2024-11-27 05:01:46.802358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.802401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:39.730 [2024-11-27 05:01:46.807104] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.807161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:39.730 [2024-11-27 05:01:46.807173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:39:39.730 [2024-11-27 05:01:46.807181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.807433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.807448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:39.730 [2024-11-27 05:01:46.807458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:39:39.730 [2024-11-27 05:01:46.807467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.820091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.820164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:39.730 [2024-11-27 05:01:46.820178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms 00:39:39.730 [2024-11-27 05:01:46.820186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.826407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.826453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:39.730 [2024-11-27 05:01:46.826475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:39:39.730 [2024-11-27 05:01:46.826484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.854131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.854188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:39.730 [2024-11-27 05:01:46.854201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.583 ms 00:39:39.730 [2024-11-27 05:01:46.854210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.871148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.871199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:39.730 [2024-11-27 05:01:46.871212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.888 ms 00:39:39.730 [2024-11-27 05:01:46.871221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.876444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.876498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:39.730 [2024-11-27 05:01:46.876512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.165 ms 00:39:39.730 [2024-11-27 05:01:46.876531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.902968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.903018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:39.730 [2024-11-27 05:01:46.903032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.420 ms 00:39:39.730 [2024-11-27 05:01:46.903039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:39.730 [2024-11-27 05:01:46.929343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.730 [2024-11-27 05:01:46.929392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:39.730 [2024-11-27 05:01:46.929404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.235 ms 00:39:39.730 [2024-11-27 05:01:46.929412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.993 [2024-11-27 05:01:46.955190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.993 [2024-11-27 05:01:46.955243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:39.993 [2024-11-27 05:01:46.955257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.727 ms 00:39:39.993 [2024-11-27 05:01:46.955264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.993 [2024-11-27 05:01:46.981242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.993 [2024-11-27 05:01:46.981295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:39.993 [2024-11-27 05:01:46.981308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.883 ms 00:39:39.993 [2024-11-27 05:01:46.981315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.993 [2024-11-27 05:01:46.981376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:39.993 [2024-11-27 05:01:46.981394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:39.993 [2024-11-27 05:01:46.981405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:39:39.993 [2024-11-27 05:01:46.981414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981505] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 
05:01:46.981693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:39.993 [2024-11-27 05:01:46.981734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:39:39.994 [2024-11-27 05:01:46.981887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.981997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:39.994 [2024-11-27 05:01:46.982202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:39.994 [2024-11-27 05:01:46.982210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbf2820b-ef9f-4f2f-b29c-d3b26af8645f 00:39:39.994 [2024-11-27 05:01:46.982219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:39:39.994 [2024-11-27 05:01:46.982226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 151232 00:39:39.994 [2024-11-27 05:01:46.982240] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 149248 00:39:39.994 [2024-11-27 05:01:46.982249] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0133 00:39:39.994 [2024-11-27 05:01:46.982257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:39.994 [2024-11-27 05:01:46.982273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:39.994 [2024-11-27 05:01:46.982280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:39.994 [2024-11-27 05:01:46.982287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:39.994 [2024-11-27 05:01:46.982295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:39.994 [2024-11-27 05:01:46.982303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.994 [2024-11-27 05:01:46.982311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:39.994 [2024-11-27 05:01:46.982320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:39:39.994 [2024-11-27 05:01:46.982328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:46.995947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.994 [2024-11-27 05:01:46.995992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
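[Annotation: a quick sanity check on the statistics dump above, using only numbers it reports. The two non-free bands account for exactly the valid-LBA count (261120 + 1536 = 262656), and the WAF figure is total writes divided by user writes; the ~1.3% overhead is presumably the FTL's own metadata writes on top of user data. Illustrative shell one-liners, numbers copied from the dump:]

    echo $(( 261120 + 1536 ))                               # -> 262656, the "total valid LBAs" above
    awk 'BEGIN { printf "WAF = %.4f\n", 151232 / 149248 }'  # -> WAF = 1.0133, matching the dump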
00:39:39.994 [2024-11-27 05:01:46.996004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:39:39.994 [2024-11-27 05:01:46.996012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:46.996429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:39.994 [2024-11-27 05:01:46.996447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:39.994 [2024-11-27 05:01:46.996457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:39:39.994 [2024-11-27 05:01:46.996465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:47.033219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.994 [2024-11-27 05:01:47.033273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:39.994 [2024-11-27 05:01:47.033284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.994 [2024-11-27 05:01:47.033292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:47.033384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.994 [2024-11-27 05:01:47.033394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:39.994 [2024-11-27 05:01:47.033404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.994 [2024-11-27 05:01:47.033415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:47.033519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.994 [2024-11-27 05:01:47.033530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:39.994 [2024-11-27 05:01:47.033540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.994 [2024-11-27 05:01:47.033547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:47.033563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.994 [2024-11-27 05:01:47.033572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:39.994 [2024-11-27 05:01:47.033580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.994 [2024-11-27 05:01:47.033588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.994 [2024-11-27 05:01:47.120791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.994 [2024-11-27 05:01:47.120852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:39.994 [2024-11-27 05:01:47.120867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.994 [2024-11-27 05:01:47.120876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.191827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.191887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:39.995 [2024-11-27 05:01:47.191900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.191909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.191966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.191983] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:39.995 [2024-11-27 05:01:47.191992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.192000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.192094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.192106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:39.995 [2024-11-27 05:01:47.192116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.192125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.192224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.192235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:39.995 [2024-11-27 05:01:47.192247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.192255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.192287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.192297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:39.995 [2024-11-27 05:01:47.192306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.192314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:39.995 [2024-11-27 05:01:47.192357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:39.995 [2024-11-27 05:01:47.192367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:39.995 [2024-11-27 05:01:47.192378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:39.995 [2024-11-27 05:01:47.192386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:40.257 [2024-11-27 05:01:47.192431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:40.257 [2024-11-27 05:01:47.192442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:40.257 [2024-11-27 05:01:47.192451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:40.257 [2024-11-27 05:01:47.192459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:40.257 [2024-11-27 05:01:47.192595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.397 ms, result 0 00:39:40.830 00:39:40.830 00:39:40.830 05:01:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:43.394 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:39:43.394 05:01:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:43.394 [2024-11-27 05:01:50.110100] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
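[Annotation: for context on the two commands just above — a sketch, not the test script itself. dirty_shutdown.sh line 94 re-checks the md5 of the first copied region after the dirty shutdown and recovery, and line 95 uses spdk_dd to dump the second 262144-block region from ftl0 for the same comparison. Roughly, with paths and flags taken from the log and the checksum-generation step assumed:]

    FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl
    # before the dirty shutdown (assumed): md5sum "$FTL_DIR/testfile" > "$FTL_DIR/testfile.md5"
    md5sum -c "$FTL_DIR/testfile.md5"    # after recovery: "testfile: OK", as logged above
    # read back the second region (blocks 262144..524287) through the FTL bdev:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of="$FTL_DIR/testfile2" \
        --count=262144 --skip=262144 --json="$FTL_DIR/config/ftl.json"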
00:39:43.394 [2024-11-27 05:01:50.110296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82035 ] 00:39:43.394 [2024-11-27 05:01:50.264548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.394 [2024-11-27 05:01:50.363285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.657 [2024-11-27 05:01:50.657038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:43.657 [2024-11-27 05:01:50.657141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:43.657 [2024-11-27 05:01:50.816449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.816513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:43.657 [2024-11-27 05:01:50.816528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:43.657 [2024-11-27 05:01:50.816537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.816589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.816604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:43.657 [2024-11-27 05:01:50.816613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:39:43.657 [2024-11-27 05:01:50.816621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.816641] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:43.657 [2024-11-27 05:01:50.817413] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:43.657 [2024-11-27 05:01:50.817442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.817450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:43.657 [2024-11-27 05:01:50.817460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:39:43.657 [2024-11-27 05:01:50.817468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.819146] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:43.657 [2024-11-27 05:01:50.833119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.833170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:43.657 [2024-11-27 05:01:50.833182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.975 ms 00:39:43.657 [2024-11-27 05:01:50.833190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.833266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.833275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:43.657 [2024-11-27 05:01:50.833285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:39:43.657 [2024-11-27 05:01:50.833293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.841162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
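[Annotation: every management step in this startup is traced as an Action/name/duration/status quadruple from mngt/ftl_mngt.c (lines 427-431). If the console output is saved with one entry per line, a small awk filter turns it into a per-step timing breakdown — illustrative only; console.log is a placeholder name:]

    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%9.3f ms  %s\n", $0, name }' console.log | sort -rn | head

[For this 'FTL startup' sequence the slowest steps land in the tens of milliseconds, e.g. 'Load super block' at 13.975 ms just above.]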
00:39:43.657 [2024-11-27 05:01:50.841203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:43.657 [2024-11-27 05:01:50.841214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.771 ms 00:39:43.657 [2024-11-27 05:01:50.841227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.841306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.841316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:43.657 [2024-11-27 05:01:50.841335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:39:43.657 [2024-11-27 05:01:50.841343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.841385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.841395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:43.657 [2024-11-27 05:01:50.841404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:43.657 [2024-11-27 05:01:50.841412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.841440] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:43.657 [2024-11-27 05:01:50.845427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.845467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:43.657 [2024-11-27 05:01:50.845480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:39:43.657 [2024-11-27 05:01:50.845489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.657 [2024-11-27 05:01:50.845522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.657 [2024-11-27 05:01:50.845530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:43.657 [2024-11-27 05:01:50.845539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:39:43.657 [2024-11-27 05:01:50.845547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.658 [2024-11-27 05:01:50.845596] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:43.658 [2024-11-27 05:01:50.845620] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:43.658 [2024-11-27 05:01:50.845658] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:43.658 [2024-11-27 05:01:50.845678] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:43.658 [2024-11-27 05:01:50.845787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:43.658 [2024-11-27 05:01:50.845798] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:43.658 [2024-11-27 05:01:50.845809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:43.658 [2024-11-27 05:01:50.845819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:43.658 [2024-11-27 05:01:50.845829] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:43.658 [2024-11-27 05:01:50.845837] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:43.658 [2024-11-27 05:01:50.845845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:43.658 [2024-11-27 05:01:50.845855] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:43.658 [2024-11-27 05:01:50.845863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:43.658 [2024-11-27 05:01:50.845870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.658 [2024-11-27 05:01:50.845878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:43.658 [2024-11-27 05:01:50.845887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:39:43.658 [2024-11-27 05:01:50.845894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.658 [2024-11-27 05:01:50.845977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.658 [2024-11-27 05:01:50.845986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:43.658 [2024-11-27 05:01:50.845993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:39:43.658 [2024-11-27 05:01:50.846000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.658 [2024-11-27 05:01:50.846123] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:43.658 [2024-11-27 05:01:50.846136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:43.658 [2024-11-27 05:01:50.846145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:43.658 [2024-11-27 05:01:50.846169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:43.658 [2024-11-27 05:01:50.846191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:43.658 [2024-11-27 05:01:50.846206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:43.658 [2024-11-27 05:01:50.846214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:43.658 [2024-11-27 05:01:50.846221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:43.658 [2024-11-27 05:01:50.846235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:43.658 [2024-11-27 05:01:50.846244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:43.658 [2024-11-27 05:01:50.846251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:43.658 [2024-11-27 05:01:50.846265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846272] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:43.658 [2024-11-27 05:01:50.846286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:43.658 [2024-11-27 05:01:50.846305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:43.658 [2024-11-27 05:01:50.846325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:43.658 [2024-11-27 05:01:50.846345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:43.658 [2024-11-27 05:01:50.846365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:43.658 [2024-11-27 05:01:50.846378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:43.658 [2024-11-27 05:01:50.846385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:43.658 [2024-11-27 05:01:50.846392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:43.658 [2024-11-27 05:01:50.846399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:43.658 [2024-11-27 05:01:50.846407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:43.658 [2024-11-27 05:01:50.846414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:43.658 [2024-11-27 05:01:50.846427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:43.658 [2024-11-27 05:01:50.846435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846442] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:43.658 [2024-11-27 05:01:50.846450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:43.658 [2024-11-27 05:01:50.846458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:43.658 [2024-11-27 05:01:50.846474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:43.658 [2024-11-27 05:01:50.846482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:43.658 [2024-11-27 05:01:50.846488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:43.658 
[2024-11-27 05:01:50.846495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:43.658 [2024-11-27 05:01:50.846503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:43.658 [2024-11-27 05:01:50.846510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:43.658 [2024-11-27 05:01:50.846518] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:43.658 [2024-11-27 05:01:50.846527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:43.658 [2024-11-27 05:01:50.846539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:43.659 [2024-11-27 05:01:50.846546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:43.659 [2024-11-27 05:01:50.846554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:43.659 [2024-11-27 05:01:50.846562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:43.659 [2024-11-27 05:01:50.846569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:43.659 [2024-11-27 05:01:50.846576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:43.659 [2024-11-27 05:01:50.846583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:43.659 [2024-11-27 05:01:50.846593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:43.659 [2024-11-27 05:01:50.846600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:43.659 [2024-11-27 05:01:50.846607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:43.659 [2024-11-27 05:01:50.846645] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:43.659 [2024-11-27 05:01:50.846654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:43.659 [2024-11-27 05:01:50.846670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:43.659 [2024-11-27 05:01:50.846678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:43.659 [2024-11-27 05:01:50.846686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:43.659 [2024-11-27 05:01:50.846694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.659 [2024-11-27 05:01:50.846702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:43.659 [2024-11-27 05:01:50.846710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:39:43.659 [2024-11-27 05:01:50.846718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.920 [2024-11-27 05:01:50.878602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.920 [2024-11-27 05:01:50.878652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:43.920 [2024-11-27 05:01:50.878665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.839 ms 00:39:43.920 [2024-11-27 05:01:50.878678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.920 [2024-11-27 05:01:50.878768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.920 [2024-11-27 05:01:50.878776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:43.920 [2024-11-27 05:01:50.878784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:39:43.920 [2024-11-27 05:01:50.878793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.920 [2024-11-27 05:01:50.927481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.920 [2024-11-27 05:01:50.927534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:43.920 [2024-11-27 05:01:50.927549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.627 ms 00:39:43.920 [2024-11-27 05:01:50.927558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.920 [2024-11-27 05:01:50.927605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.920 [2024-11-27 05:01:50.927616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:43.920 [2024-11-27 05:01:50.927629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:43.920 [2024-11-27 05:01:50.927637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.920 [2024-11-27 05:01:50.928272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.920 [2024-11-27 05:01:50.928305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:43.920 [2024-11-27 05:01:50.928316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:39:43.920 [2024-11-27 05:01:50.928323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:50.928479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:50.928489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:43.921 [2024-11-27 05:01:50.928506] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:39:43.921 [2024-11-27 05:01:50.928514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:50.943995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:50.944042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:43.921 [2024-11-27 05:01:50.944053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.460 ms 00:39:43.921 [2024-11-27 05:01:50.944086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:50.958247] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:43.921 [2024-11-27 05:01:50.958296] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:43.921 [2024-11-27 05:01:50.958309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:50.958318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:43.921 [2024-11-27 05:01:50.958328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.116 ms 00:39:43.921 [2024-11-27 05:01:50.958336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:50.984467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:50.984530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:43.921 [2024-11-27 05:01:50.984544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.080 ms 00:39:43.921 [2024-11-27 05:01:50.984552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:50.997371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:50.997417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:43.921 [2024-11-27 05:01:50.997428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.754 ms 00:39:43.921 [2024-11-27 05:01:50.997435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.009908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.009953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:43.921 [2024-11-27 05:01:51.009964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:39:43.921 [2024-11-27 05:01:51.009972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.010622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.010653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:43.921 [2024-11-27 05:01:51.010667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:39:43.921 [2024-11-27 05:01:51.010675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.073579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.073645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:43.921 [2024-11-27 05:01:51.073666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.884 ms 00:39:43.921 [2024-11-27 05:01:51.073675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.084894] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:43.921 [2024-11-27 05:01:51.087781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.087822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:43.921 [2024-11-27 05:01:51.087834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:39:43.921 [2024-11-27 05:01:51.087843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.087927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.087939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:43.921 [2024-11-27 05:01:51.087952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:39:43.921 [2024-11-27 05:01:51.087960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.088785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.088829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:43.921 [2024-11-27 05:01:51.088840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:39:43.921 [2024-11-27 05:01:51.088848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.088876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.088886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:43.921 [2024-11-27 05:01:51.088894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:43.921 [2024-11-27 05:01:51.088902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.088946] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:43.921 [2024-11-27 05:01:51.088957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.088967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:43.921 [2024-11-27 05:01:51.088977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:43.921 [2024-11-27 05:01:51.088985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.114062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.114117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:43.921 [2024-11-27 05:01:51.114136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.057 ms 00:39:43.921 [2024-11-27 05:01:51.114144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.921 [2024-11-27 05:01:51.114238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.921 [2024-11-27 05:01:51.114249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:43.921 [2024-11-27 05:01:51.114258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:39:43.921 [2024-11-27 05:01:51.114268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
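[Annotation: the superblock layout dump a few entries back is in raw blocks (blk_offs/blk_sz in hex), while the region dump is in MiB; the two agree at the FTL's 4 KiB block size. For example, region type 0x2 (apparently the l2p region) has blk_sz 0x5000, which is the 80.00 MiB shown for l2p, and the same figure falls out of the reported L2P geometry (20971520 entries x 4 bytes). A sketch of the conversion, assuming the 4096-byte block size:]

    blocks=$(( 0x5000 ))                                                           # 20480 blocks
    awk -v b="$blocks" 'BEGIN { printf "%.2f MiB\n", b * 4096 / (1024 * 1024) }'   # -> 80.00 MiB
    awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'              # -> 80.00 MiB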
00:39:43.921 [2024-11-27 05:01:51.115491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.541 ms, result 0 00:39:45.307  [2024-11-27T05:01:53.450Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-27T05:01:54.396Z] Copying: 42/1024 [MB] (21 MBps) [2024-11-27T05:01:55.339Z] Copying: 56/1024 [MB] (14 MBps) [2024-11-27T05:01:56.726Z] Copying: 73/1024 [MB] (17 MBps) [2024-11-27T05:01:57.299Z] Copying: 89/1024 [MB] (15 MBps) [2024-11-27T05:01:58.687Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-27T05:01:59.688Z] Copying: 121/1024 [MB] (18 MBps) [2024-11-27T05:02:00.632Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-27T05:02:01.577Z] Copying: 144/1024 [MB] (13 MBps) [2024-11-27T05:02:02.520Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-27T05:02:03.463Z] Copying: 174/1024 [MB] (19 MBps) [2024-11-27T05:02:04.437Z] Copying: 193/1024 [MB] (18 MBps) [2024-11-27T05:02:05.382Z] Copying: 208/1024 [MB] (15 MBps) [2024-11-27T05:02:06.323Z] Copying: 223/1024 [MB] (15 MBps) [2024-11-27T05:02:07.707Z] Copying: 239/1024 [MB] (15 MBps) [2024-11-27T05:02:08.647Z] Copying: 256/1024 [MB] (16 MBps) [2024-11-27T05:02:09.588Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-27T05:02:10.530Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-27T05:02:11.474Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-27T05:02:12.420Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-27T05:02:13.361Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-27T05:02:14.306Z] Copying: 339/1024 [MB] (13 MBps) [2024-11-27T05:02:15.693Z] Copying: 351/1024 [MB] (12 MBps) [2024-11-27T05:02:16.641Z] Copying: 369/1024 [MB] (17 MBps) [2024-11-27T05:02:17.585Z] Copying: 389/1024 [MB] (20 MBps) [2024-11-27T05:02:18.526Z] Copying: 407/1024 [MB] (18 MBps) [2024-11-27T05:02:19.466Z] Copying: 426/1024 [MB] (18 MBps) [2024-11-27T05:02:20.412Z] Copying: 446/1024 [MB] (19 MBps) [2024-11-27T05:02:21.357Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-27T05:02:22.304Z] Copying: 474/1024 [MB] (18 MBps) [2024-11-27T05:02:23.690Z] Copying: 493/1024 [MB] (18 MBps) [2024-11-27T05:02:24.633Z] Copying: 509/1024 [MB] (16 MBps) [2024-11-27T05:02:25.577Z] Copying: 531/1024 [MB] (21 MBps) [2024-11-27T05:02:26.522Z] Copying: 553/1024 [MB] (22 MBps) [2024-11-27T05:02:27.526Z] Copying: 572/1024 [MB] (18 MBps) [2024-11-27T05:02:28.476Z] Copying: 590/1024 [MB] (18 MBps) [2024-11-27T05:02:29.428Z] Copying: 608/1024 [MB] (18 MBps) [2024-11-27T05:02:30.371Z] Copying: 625/1024 [MB] (16 MBps) [2024-11-27T05:02:31.316Z] Copying: 638/1024 [MB] (13 MBps) [2024-11-27T05:02:32.705Z] Copying: 651/1024 [MB] (12 MBps) [2024-11-27T05:02:33.649Z] Copying: 668/1024 [MB] (17 MBps) [2024-11-27T05:02:34.594Z] Copying: 685/1024 [MB] (17 MBps) [2024-11-27T05:02:35.539Z] Copying: 704/1024 [MB] (18 MBps) [2024-11-27T05:02:36.485Z] Copying: 717/1024 [MB] (12 MBps) [2024-11-27T05:02:37.430Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-27T05:02:38.373Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-27T05:02:39.326Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-27T05:02:40.715Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-27T05:02:41.659Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-27T05:02:42.606Z] Copying: 782/1024 [MB] (13 MBps) [2024-11-27T05:02:43.551Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-27T05:02:44.498Z] Copying: 803/1024 [MB] (10 MBps) [2024-11-27T05:02:45.443Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-27T05:02:46.386Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-27T05:02:47.328Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-27T05:02:48.717Z] Copying: 858/1024 [MB] (23 MBps) 
[2024-11-27T05:02:49.662Z] Copying: 871/1024 [MB] (13 MBps) [2024-11-27T05:02:50.609Z] Copying: 882/1024 [MB] (10 MBps) [2024-11-27T05:02:51.553Z] Copying: 892/1024 [MB] (10 MBps) [2024-11-27T05:02:52.498Z] Copying: 905/1024 [MB] (12 MBps) [2024-11-27T05:02:53.440Z] Copying: 915/1024 [MB] (10 MBps) [2024-11-27T05:02:54.380Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-27T05:02:55.325Z] Copying: 940/1024 [MB] (14 MBps) [2024-11-27T05:02:56.760Z] Copying: 951/1024 [MB] (10 MBps) [2024-11-27T05:02:57.346Z] Copying: 966/1024 [MB] (14 MBps) [2024-11-27T05:02:58.737Z] Copying: 981/1024 [MB] (14 MBps) [2024-11-27T05:02:59.310Z] Copying: 999/1024 [MB] (18 MBps) [2024-11-27T05:03:00.699Z] Copying: 1013/1024 [MB] (13 MBps) [2024-11-27T05:03:00.699Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 05:03:00.313422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.313518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:53.496 [2024-11-27 05:03:00.313537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:53.496 [2024-11-27 05:03:00.313547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.313574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:53.496 [2024-11-27 05:03:00.317982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.318051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:53.496 [2024-11-27 05:03:00.318081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:40:53.496 [2024-11-27 05:03:00.318094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.318345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.318358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:53.496 [2024-11-27 05:03:00.318368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:40:53.496 [2024-11-27 05:03:00.318377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.321860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.321888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:53.496 [2024-11-27 05:03:00.321898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:40:53.496 [2024-11-27 05:03:00.321911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.328237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.328285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:53.496 [2024-11-27 05:03:00.328297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.305 ms 00:40:53.496 [2024-11-27 05:03:00.328305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.356012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.356080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:53.496 [2024-11-27 05:03:00.356094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.629 ms 00:40:53.496 [2024-11-27 
05:03:00.356102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.373222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.373280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:53.496 [2024-11-27 05:03:00.373293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.065 ms 00:40:53.496 [2024-11-27 05:03:00.373302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.378384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.378436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:53.496 [2024-11-27 05:03:00.378447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.001 ms 00:40:53.496 [2024-11-27 05:03:00.378456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.405272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.405328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:53.496 [2024-11-27 05:03:00.405341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.798 ms 00:40:53.496 [2024-11-27 05:03:00.405362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.434588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.434647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:53.496 [2024-11-27 05:03:00.434660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.175 ms 00:40:53.496 [2024-11-27 05:03:00.434668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.461163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.461224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:53.496 [2024-11-27 05:03:00.461237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.438 ms 00:40:53.496 [2024-11-27 05:03:00.461246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.486967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.496 [2024-11-27 05:03:00.487020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:53.496 [2024-11-27 05:03:00.487032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:40:53.496 [2024-11-27 05:03:00.487039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.496 [2024-11-27 05:03:00.487096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:53.496 [2024-11-27 05:03:00.487121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:40:53.496 [2024-11-27 05:03:00.487135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:40:53.496 [2024-11-27 05:03:00.487144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:53.496 [2024-11-27 05:03:00.487152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:53.496 [2024-11-27 05:03:00.487161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 
05:03:00.487353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:40:53.497 [2024-11-27 05:03:00.487545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:53.497 [2024-11-27 05:03:00.487807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:53.498 [2024-11-27 05:03:00.487922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:53.498 [2024-11-27 05:03:00.487930] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbf2820b-ef9f-4f2f-b29c-d3b26af8645f 00:40:53.498 [2024-11-27 05:03:00.487939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:40:53.498 [2024-11-27 05:03:00.487947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:53.498 [2024-11-27 
05:03:00.487954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:53.498 [2024-11-27 05:03:00.487963] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:53.498 [2024-11-27 05:03:00.487979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:53.498 [2024-11-27 05:03:00.487987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:53.498 [2024-11-27 05:03:00.487995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:53.498 [2024-11-27 05:03:00.488003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:53.498 [2024-11-27 05:03:00.488010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:53.498 [2024-11-27 05:03:00.488018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.498 [2024-11-27 05:03:00.488026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:53.498 [2024-11-27 05:03:00.488036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:40:53.498 [2024-11-27 05:03:00.488046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.502108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.498 [2024-11-27 05:03:00.502155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:53.498 [2024-11-27 05:03:00.502166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.030 ms 00:40:53.498 [2024-11-27 05:03:00.502174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.502583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.498 [2024-11-27 05:03:00.502608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:53.498 [2024-11-27 05:03:00.502619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:40:53.498 [2024-11-27 05:03:00.502627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.539685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.498 [2024-11-27 05:03:00.539742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:53.498 [2024-11-27 05:03:00.539754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.498 [2024-11-27 05:03:00.539764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.539827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.498 [2024-11-27 05:03:00.539843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:53.498 [2024-11-27 05:03:00.539853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.498 [2024-11-27 05:03:00.539863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.539953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.498 [2024-11-27 05:03:00.539966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:53.498 [2024-11-27 05:03:00.539975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.498 [2024-11-27 05:03:00.539984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.540001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
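The bands dump and the statistics block above agree with each other: Band 1 is closed and fully valid (261120 / 261120), Band 2 is open with 1536 valid blocks, and every other band is free, so the device-wide total is 261120 + 1536 = 262656, exactly the "total valid LBAs: 262656" reported in the stats records. The "WAF: inf" line follows from the same numbers: write amplification is the ratio of media writes to user writes, so here WAF = total writes / user writes = 960 / 0, which has no finite value and is printed as inf; all 960 recorded writes on this instance were internal writes with no user I/O attributed to them.
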
[FTL][ftl0] Rollback 00:40:53.498 [2024-11-27 05:03:00.540010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:53.498 [2024-11-27 05:03:00.540024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.498 [2024-11-27 05:03:00.540033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.498 [2024-11-27 05:03:00.627155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.498 [2024-11-27 05:03:00.627225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:53.498 [2024-11-27 05:03:00.627239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.498 [2024-11-27 05:03:00.627249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.697580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.697652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:53.760 [2024-11-27 05:03:00.697665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.697673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.697737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.697748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:53.760 [2024-11-27 05:03:00.697758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.697766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.697823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.697835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:53.760 [2024-11-27 05:03:00.697844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.697856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.697956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.697966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:53.760 [2024-11-27 05:03:00.697975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.697983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.698018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.698029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:53.760 [2024-11-27 05:03:00.698037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.698044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.698112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.698123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:53.760 [2024-11-27 05:03:00.698132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.698140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 
[2024-11-27 05:03:00.698186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:53.760 [2024-11-27 05:03:00.698197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:53.760 [2024-11-27 05:03:00.698206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:53.760 [2024-11-27 05:03:00.698217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.760 [2024-11-27 05:03:00.698357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.899 ms, result 0 00:40:54.333 00:40:54.333 00:40:54.333 05:03:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:40:56.883 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:40:56.883 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80302 00:40:56.884 05:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80302 ']' 00:40:56.884 05:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80302 00:40:56.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80302) - No such process 00:40:56.884 Process with pid 80302 is not found 00:40:56.884 05:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80302 is not found' 00:40:56.884 05:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:40:57.145 Remove shared memory files 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:40:57.145 00:40:57.145 real 3m57.344s 00:40:57.145 user 4m20.605s 00:40:57.145 sys 0m26.130s 00:40:57.145 ************************************ 00:40:57.145 END TEST ftl_dirty_shutdown 00:40:57.145 ************************************ 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:57.145 05:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 05:03:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:40:57.145 05:03:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:57.145 05:03:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:57.145 05:03:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 ************************************ 00:40:57.145 START TEST ftl_upgrade_shutdown 00:40:57.145 ************************************ 00:40:57.145 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:40:57.407 * Looking for test storage... 00:40:57.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:57.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.407 --rc genhtml_branch_coverage=1 00:40:57.407 --rc genhtml_function_coverage=1 00:40:57.407 --rc genhtml_legend=1 00:40:57.407 --rc geninfo_all_blocks=1 00:40:57.407 --rc geninfo_unexecuted_blocks=1 00:40:57.407 00:40:57.407 ' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:57.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.407 --rc genhtml_branch_coverage=1 00:40:57.407 --rc genhtml_function_coverage=1 00:40:57.407 --rc genhtml_legend=1 00:40:57.407 --rc geninfo_all_blocks=1 00:40:57.407 --rc geninfo_unexecuted_blocks=1 00:40:57.407 00:40:57.407 ' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:57.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.407 --rc genhtml_branch_coverage=1 00:40:57.407 --rc genhtml_function_coverage=1 00:40:57.407 --rc genhtml_legend=1 00:40:57.407 --rc geninfo_all_blocks=1 00:40:57.407 --rc geninfo_unexecuted_blocks=1 00:40:57.407 00:40:57.407 ' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:57.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.407 --rc genhtml_branch_coverage=1 00:40:57.407 --rc genhtml_function_coverage=1 00:40:57.407 --rc genhtml_legend=1 00:40:57.407 --rc geninfo_all_blocks=1 00:40:57.407 --rc geninfo_unexecuted_blocks=1 00:40:57.407 00:40:57.407 ' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- 
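The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0 ("lt 1.15 2"): cmp_versions splits both version strings on ".", "-" and ":" into arrays, compares them component by component, and the result selects the legacy --rc lcov_*_coverage option spellings for LCOV_OPTS. A condensed sketch of the same compare-by-components idiom (not a verbatim copy of scripts/common.sh; numeric version components are assumed):

# Return success iff version $1 is strictly less than version $2 (sketch).
lt() {
  local -a v1 v2
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  local i a b
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0} b=${v2[i]:-0}    # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                         # equal: not less-than
}
lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* option names'
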
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:40:57.407 05:03:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:40:57.407 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82858 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82858 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82858 ']' 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:57.408 05:03:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:40:57.669 [2024-11-27 05:03:04.615413] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
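waitforlisten above gates the rest of the test on the freshly launched spdk_tgt (pid 82858) answering on /var/tmp/spdk.sock, with up to 100 retries. The same readiness check can be written as a simple polling loop against an RPC that is always registered; a minimal sketch, assuming rpc.py is on PATH and using rpc_get_methods as the probe (the real waitforlisten in autotest_common.sh is more thorough):

# Poll the target's RPC socket until it answers, bailing out if the process dies.
pid=82858 sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt (pid $pid) exited early" >&2; exit 1; }
  rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
  sleep 0.5
done
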
00:40:57.669 [2024-11-27 05:03:04.615560] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82858 ] 00:40:57.669 [2024-11-27 05:03:04.780295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.930 [2024-11-27 05:03:04.899189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:40:58.501 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:40:58.762 05:03:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:59.023 { 00:40:59.023 "name": "basen1", 00:40:59.023 "aliases": [ 00:40:59.023 "3ed1f059-3920-42e9-94aa-5edff4b51d1f" 00:40:59.023 ], 00:40:59.023 "product_name": "NVMe disk", 00:40:59.023 "block_size": 4096, 00:40:59.023 "num_blocks": 1310720, 00:40:59.023 "uuid": "3ed1f059-3920-42e9-94aa-5edff4b51d1f", 00:40:59.023 "numa_id": -1, 00:40:59.023 "assigned_rate_limits": { 00:40:59.023 "rw_ios_per_sec": 0, 00:40:59.023 "rw_mbytes_per_sec": 0, 00:40:59.023 "r_mbytes_per_sec": 0, 00:40:59.023 "w_mbytes_per_sec": 0 00:40:59.023 }, 00:40:59.023 "claimed": true, 00:40:59.023 "claim_type": "read_many_write_one", 00:40:59.023 "zoned": false, 00:40:59.023 "supported_io_types": { 00:40:59.023 "read": true, 00:40:59.023 "write": true, 00:40:59.023 "unmap": true, 00:40:59.023 "flush": true, 00:40:59.023 "reset": true, 00:40:59.023 "nvme_admin": true, 00:40:59.023 "nvme_io": true, 00:40:59.023 "nvme_io_md": false, 00:40:59.023 "write_zeroes": true, 00:40:59.023 "zcopy": false, 00:40:59.023 "get_zone_info": false, 00:40:59.023 "zone_management": false, 00:40:59.023 "zone_append": false, 00:40:59.023 "compare": true, 00:40:59.023 "compare_and_write": false, 00:40:59.023 "abort": true, 00:40:59.023 "seek_hole": false, 00:40:59.023 "seek_data": false, 00:40:59.023 "copy": true, 00:40:59.023 "nvme_iov_md": false 00:40:59.023 }, 00:40:59.023 "driver_specific": { 00:40:59.023 "nvme": [ 00:40:59.023 { 00:40:59.023 "pci_address": "0000:00:11.0", 00:40:59.023 "trid": { 00:40:59.023 "trtype": "PCIe", 00:40:59.023 "traddr": "0000:00:11.0" 00:40:59.023 }, 00:40:59.023 "ctrlr_data": { 00:40:59.023 "cntlid": 0, 00:40:59.023 "vendor_id": "0x1b36", 00:40:59.023 "model_number": "QEMU NVMe Ctrl", 00:40:59.023 "serial_number": "12341", 00:40:59.023 "firmware_revision": "8.0.0", 00:40:59.023 "subnqn": "nqn.2019-08.org.qemu:12341", 00:40:59.023 "oacs": { 00:40:59.023 "security": 0, 00:40:59.023 "format": 1, 00:40:59.023 "firmware": 0, 00:40:59.023 "ns_manage": 1 00:40:59.023 }, 00:40:59.023 "multi_ctrlr": false, 00:40:59.023 "ana_reporting": false 00:40:59.023 }, 00:40:59.023 "vs": { 00:40:59.023 "nvme_version": "1.4" 00:40:59.023 }, 00:40:59.023 "ns_data": { 00:40:59.023 "id": 1, 00:40:59.023 "can_share": false 00:40:59.023 } 00:40:59.023 } 00:40:59.023 ], 00:40:59.023 "mp_policy": "active_passive" 00:40:59.023 } 00:40:59.023 } 00:40:59.023 ]' 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- 
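get_bdev_size above turns the namespace geometry into a MiB figure: bdev_get_bdevs reports block_size 4096 and num_blocks 1310720 for basen1, and 1310720 blocks x 4096 B = 5,368,709,120 B = 5120 MiB, the value echoed back as base_size. The subsequent [[ 20480 -le 5120 ]] then compares the requested base size from the config against that capacity. The same computation by hand, using the jq filters from the trace (rpc.py assumed on PATH):

# Recompute base_size: blocks * block_size, expressed in MiB.
bs=$(rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096
nb=$(rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
echo $(( bs * nb / 1024 / 1024 ))                              # 5120
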
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:59.023 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:40:59.283 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=797a22e6-4fa1-488c-9a3d-f30535e7f51c 00:40:59.283 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:40:59.283 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 797a22e6-4fa1-488c-9a3d-f30535e7f51c 00:40:59.543 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:40:59.543 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2741d7c8-3ad6-4b09-ae73-8a1810354733 00:40:59.543 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2741d7c8-3ad6-4b09-ae73-8a1810354733 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=746fc276-6573-4544-bbd0-4c40fc242394 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 746fc276-6573-4544-bbd0-4c40fc242394 ]] 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 746fc276-6573-4544-bbd0-4c40fc242394 5120 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=746fc276-6573-4544-bbd0-4c40fc242394 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 746fc276-6573-4544-bbd0-4c40fc242394 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=746fc276-6573-4544-bbd0-4c40fc242394 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:59.802 05:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 746fc276-6573-4544-bbd0-4c40fc242394 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:00.063 { 00:41:00.063 "name": "746fc276-6573-4544-bbd0-4c40fc242394", 00:41:00.063 "aliases": [ 00:41:00.063 "lvs/basen1p0" 00:41:00.063 ], 00:41:00.063 "product_name": "Logical Volume", 00:41:00.063 "block_size": 4096, 00:41:00.063 "num_blocks": 5242880, 00:41:00.063 "uuid": "746fc276-6573-4544-bbd0-4c40fc242394", 00:41:00.063 "assigned_rate_limits": { 00:41:00.063 "rw_ios_per_sec": 0, 00:41:00.063 "rw_mbytes_per_sec": 0, 00:41:00.063 "r_mbytes_per_sec": 0, 00:41:00.063 "w_mbytes_per_sec": 0 00:41:00.063 }, 00:41:00.063 "claimed": false, 00:41:00.063 "zoned": false, 00:41:00.063 "supported_io_types": { 00:41:00.063 "read": true, 00:41:00.063 "write": true, 00:41:00.063 "unmap": true, 00:41:00.063 "flush": false, 00:41:00.063 "reset": true, 00:41:00.063 "nvme_admin": false, 00:41:00.063 "nvme_io": false, 00:41:00.063 "nvme_io_md": false, 00:41:00.063 "write_zeroes": 
true, 00:41:00.063 "zcopy": false, 00:41:00.063 "get_zone_info": false, 00:41:00.063 "zone_management": false, 00:41:00.063 "zone_append": false, 00:41:00.063 "compare": false, 00:41:00.063 "compare_and_write": false, 00:41:00.063 "abort": false, 00:41:00.063 "seek_hole": true, 00:41:00.063 "seek_data": true, 00:41:00.063 "copy": false, 00:41:00.063 "nvme_iov_md": false 00:41:00.063 }, 00:41:00.063 "driver_specific": { 00:41:00.063 "lvol": { 00:41:00.063 "lvol_store_uuid": "2741d7c8-3ad6-4b09-ae73-8a1810354733", 00:41:00.063 "base_bdev": "basen1", 00:41:00.063 "thin_provision": true, 00:41:00.063 "num_allocated_clusters": 0, 00:41:00.063 "snapshot": false, 00:41:00.063 "clone": false, 00:41:00.063 "esnap_clone": false 00:41:00.063 } 00:41:00.063 } 00:41:00.063 } 00:41:00.063 ]' 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:41:00.063 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:41:00.324 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:41:00.324 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:41:00.324 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:41:00.585 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:41:00.585 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:41:00.585 05:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 746fc276-6573-4544-bbd0-4c40fc242394 -c cachen1p0 --l2p_dram_limit 2 00:41:00.845 [2024-11-27 05:03:07.864124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.845 [2024-11-27 05:03:07.864188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:41:00.845 [2024-11-27 05:03:07.864207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:00.845 [2024-11-27 05:03:07.864217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.845 [2024-11-27 05:03:07.864297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.845 [2024-11-27 05:03:07.864308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:00.845 [2024-11-27 05:03:07.864320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:41:00.845 [2024-11-27 05:03:07.864328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.845 [2024-11-27 05:03:07.864351] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:41:00.845 [2024-11-27 
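Everything from the lvstore cleanup through bdev_ftl_create above is the assembly of the device under test: a thin-provisioned 20480 MiB logical volume carved from basen1 becomes the FTL base device, a 5120 MiB split of the cache controller becomes the write-buffer cache, and the two are bound with the L2P DRAM limit of 2 from the test config. Replayed as plain RPC calls (UUIDs are the ones from this run; rpc.py assumed on PATH):

# The RPC sequence traced above, in order (sketch).
rpc.py bdev_lvol_delete_lvstore -u 797a22e6-4fa1-488c-9a3d-f30535e7f51c   # drop the stale lvstore
rpc.py bdev_lvol_create_lvstore basen1 lvs                                # new lvstore 2741d7c8-...
rpc.py bdev_lvol_create basen1p0 20480 -t -u 2741d7c8-3ad6-4b09-ae73-8a1810354733  # thin lvol 746fc276-...
rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0       # exposes cachen1
rpc.py bdev_split_create cachen1 -s 5120 1                                # carves cachen1p0
rpc.py -t 60 bdev_ftl_create -b ftl -d 746fc276-6573-4544-bbd0-4c40fc242394 -c cachen1p0 --l2p_dram_limit 2
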
05:03:07.865183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:41:00.845 [2024-11-27 05:03:07.865210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.865219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:00.846 [2024-11-27 05:03:07.865231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:41:00.846 [2024-11-27 05:03:07.865239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.865280] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a2acda3b-d569-4b5e-82be-d717fca2bca3 00:41:00.846 [2024-11-27 05:03:07.867137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.867413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:41:00.846 [2024-11-27 05:03:07.867447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:41:00.846 [2024-11-27 05:03:07.867462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.876505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.876557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:00.846 [2024-11-27 05:03:07.876569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.912 ms 00:41:00.846 [2024-11-27 05:03:07.876579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.876627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.876638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:00.846 [2024-11-27 05:03:07.876647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:41:00.846 [2024-11-27 05:03:07.876660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.876708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.876721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:41:00.846 [2024-11-27 05:03:07.876731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:41:00.846 [2024-11-27 05:03:07.876743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.876768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:41:00.846 [2024-11-27 05:03:07.881115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.881155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:00.846 [2024-11-27 05:03:07.881170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.352 ms 00:41:00.846 [2024-11-27 05:03:07.881179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.881213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.881228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:41:00.846 [2024-11-27 05:03:07.881239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:00.846 [2024-11-27 05:03:07.881247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.881299] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:41:00.846 [2024-11-27 05:03:07.881468] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:41:00.846 [2024-11-27 05:03:07.881492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:41:00.846 [2024-11-27 05:03:07.881509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:41:00.846 [2024-11-27 05:03:07.881526] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:41:00.846 [2024-11-27 05:03:07.881535] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:41:00.846 [2024-11-27 05:03:07.881546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:41:00.846 [2024-11-27 05:03:07.881556] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:41:00.846 [2024-11-27 05:03:07.881566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:41:00.846 [2024-11-27 05:03:07.881574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:41:00.846 [2024-11-27 05:03:07.881585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.881592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:41:00.846 [2024-11-27 05:03:07.881602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:41:00.846 [2024-11-27 05:03:07.881610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.881698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.846 [2024-11-27 05:03:07.881714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:41:00.846 [2024-11-27 05:03:07.881727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:41:00.846 [2024-11-27 05:03:07.881734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.846 [2024-11-27 05:03:07.881847] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:41:00.846 [2024-11-27 05:03:07.881861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:41:00.846 [2024-11-27 05:03:07.881876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:00.846 [2024-11-27 05:03:07.881888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.881903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:41:00.846 [2024-11-27 05:03:07.881914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.881927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:41:00.846 [2024-11-27 05:03:07.881940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:41:00.846 [2024-11-27 05:03:07.881949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:41:00.846 [2024-11-27 05:03:07.881955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.881964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:41:00.846 [2024-11-27 05:03:07.881976] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:41:00.846 [2024-11-27 05:03:07.881987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.881999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:41:00.846 [2024-11-27 05:03:07.882008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:41:00.846 [2024-11-27 05:03:07.882015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:41:00.846 [2024-11-27 05:03:07.882035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:41:00.846 [2024-11-27 05:03:07.882043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:41:00.846 [2024-11-27 05:03:07.882059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:41:00.846 [2024-11-27 05:03:07.882112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:41:00.846 [2024-11-27 05:03:07.882136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:41:00.846 [2024-11-27 05:03:07.882160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:41:00.846 [2024-11-27 05:03:07.882187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:41:00.846 [2024-11-27 05:03:07.882210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:41:00.846 [2024-11-27 05:03:07.882233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:41:00.846 [2024-11-27 05:03:07.882255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:41:00.846 [2024-11-27 05:03:07.882265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882271] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:41:00.846 [2024-11-27 05:03:07.882285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:41:00.846 [2024-11-27 05:03:07.882299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:00.846 [2024-11-27 05:03:07.882320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:41:00.846 [2024-11-27 05:03:07.882336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:41:00.846 [2024-11-27 05:03:07.882347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:41:00.846 [2024-11-27 05:03:07.882361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:41:00.846 [2024-11-27 05:03:07.882371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:41:00.846 [2024-11-27 05:03:07.882384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:41:00.846 [2024-11-27 05:03:07.882402] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:41:00.846 [2024-11-27 05:03:07.882423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:41:00.847 [2024-11-27 05:03:07.882443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:41:00.847 [2024-11-27 05:03:07.882477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:41:00.847 [2024-11-27 05:03:07.882491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:41:00.847 [2024-11-27 05:03:07.882503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:41:00.847 [2024-11-27 05:03:07.882518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:41:00.847 [2024-11-27 05:03:07.882599] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:41:00.847 [2024-11-27 05:03:07.882615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:00.847 [2024-11-27 05:03:07.882642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:41:00.847 [2024-11-27 05:03:07.882653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:41:00.847 [2024-11-27 05:03:07.882670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:41:00.847 [2024-11-27 05:03:07.882682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:00.847 [2024-11-27 05:03:07.882696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:41:00.847 [2024-11-27 05:03:07.882709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:41:00.847 [2024-11-27 05:03:07.882719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:00.847 [2024-11-27 05:03:07.882770] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
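While the scrub runs, note that the layout dump and the superblock metadata dump above describe the same regions in two notations: MiB offsets and sizes on one hand, hex block offsets and sizes (blk_offs/blk_sz) on the other. The two line up if each FTL block is 4 KiB, which is an assumption here; blk_to_mib below is a hypothetical helper for cross-checking, not part of the test suite:

    # Convert an FTL block count to MiB, assuming 4 KiB FTL blocks.
    blk_to_mib() { echo "scale=2; $1 * 4096 / 1048576" | bc; }
    blk_to_mib $((0x20))      # -> .12      matches the l2p region offset "0.12 MiB"
    blk_to_mib $((0xe80))     # -> 14.50    matches the l2p region size "14.50 MiB"
    blk_to_mib $((0x480000))  # -> 18432.00 matches the base-dev data region "18432.00 MiB"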
00:41:00.847 [2024-11-27 05:03:07.882790] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:41:05.045 [2024-11-27 05:03:11.485406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.485473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:41:05.045 [2024-11-27 05:03:11.485490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3602.621 ms 00:41:05.045 [2024-11-27 05:03:11.485502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.513634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.513824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:05.045 [2024-11-27 05:03:11.513845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.914 ms 00:41:05.045 [2024-11-27 05:03:11.513855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.513935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.513948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:41:05.045 [2024-11-27 05:03:11.513957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:41:05.045 [2024-11-27 05:03:11.513973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.547859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.547911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:05.045 [2024-11-27 05:03:11.547924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.834 ms 00:41:05.045 [2024-11-27 05:03:11.547936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.547975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.547986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:05.045 [2024-11-27 05:03:11.547995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:41:05.045 [2024-11-27 05:03:11.548005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.548614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.548673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:05.045 [2024-11-27 05:03:11.548692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:41:05.045 [2024-11-27 05:03:11.548702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.548750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.548764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:05.045 [2024-11-27 05:03:11.548772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:41:05.045 [2024-11-27 05:03:11.548784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.565853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.565901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:05.045 [2024-11-27 05:03:11.565913] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.048 ms 00:41:05.045 [2024-11-27 05:03:11.565923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.594390] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:41:05.045 [2024-11-27 05:03:11.595644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.595808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:41:05.045 [2024-11-27 05:03:11.595833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.633 ms 00:41:05.045 [2024-11-27 05:03:11.595841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.623203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.623250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:41:05.045 [2024-11-27 05:03:11.623267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.315 ms 00:41:05.045 [2024-11-27 05:03:11.623277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.623386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.623397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:41:05.045 [2024-11-27 05:03:11.623412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:41:05.045 [2024-11-27 05:03:11.623421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.648452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.648499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:41:05.045 [2024-11-27 05:03:11.648515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.974 ms 00:41:05.045 [2024-11-27 05:03:11.648523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.673461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.673507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:41:05.045 [2024-11-27 05:03:11.673521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.881 ms 00:41:05.045 [2024-11-27 05:03:11.673529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.674142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.674161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:41:05.045 [2024-11-27 05:03:11.674176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:41:05.045 [2024-11-27 05:03:11.674184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.754729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.754924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:41:05.045 [2024-11-27 05:03:11.754956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 80.498 ms 00:41:05.045 [2024-11-27 05:03:11.754966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.781624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:41:05.045 [2024-11-27 05:03:11.781808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:41:05.045 [2024-11-27 05:03:11.781834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.525 ms 00:41:05.045 [2024-11-27 05:03:11.781843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.045 [2024-11-27 05:03:11.807609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.045 [2024-11-27 05:03:11.807654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:41:05.046 [2024-11-27 05:03:11.807668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.687 ms 00:41:05.046 [2024-11-27 05:03:11.807676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.046 [2024-11-27 05:03:11.833146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.046 [2024-11-27 05:03:11.833190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:41:05.046 [2024-11-27 05:03:11.833205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.421 ms 00:41:05.046 [2024-11-27 05:03:11.833213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.046 [2024-11-27 05:03:11.833266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.046 [2024-11-27 05:03:11.833275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:41:05.046 [2024-11-27 05:03:11.833289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:41:05.046 [2024-11-27 05:03:11.833298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.046 [2024-11-27 05:03:11.833405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:05.046 [2024-11-27 05:03:11.833419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:41:05.046 [2024-11-27 05:03:11.833430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:41:05.046 [2024-11-27 05:03:11.833439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:05.046 [2024-11-27 05:03:11.834582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3969.977 ms, result 0 00:41:05.046 { 00:41:05.046 "name": "ftl", 00:41:05.046 "uuid": "a2acda3b-d569-4b5e-82be-d717fca2bca3" 00:41:05.046 } 00:41:05.046 05:03:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:41:05.046 [2024-11-27 05:03:12.061742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.046 05:03:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:41:05.308 05:03:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:41:05.308 [2024-11-27 05:03:12.490194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:41:05.570 05:03:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:41:05.570 [2024-11-27 05:03:12.699418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:05.570 05:03:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:06.143 Fill FTL, iteration 1 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82981 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82981 /var/tmp/spdk.tgt.sock 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82981 ']' 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:41:06.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:06.143 05:03:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:06.143 [2024-11-27 05:03:13.151061] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
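While the initiator-side target comes up, it is worth seeing the whole cycle that the knobs traced above (bs=1048576, count=1024, qd=2, iterations=2) drive. Pieced together from the upgrade_shutdown.sh xtrace lines in this log, the loop is roughly the following; this is a paraphrase of the traced script, not its verbatim source, and $tf stands in for the test/ftl/file path shown in the trace:

    seek=0; skip=0; bs=1048576; count=1024; iterations=2; qd=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        # write 1 GiB of random data into the exported FTL bdev over NVMe/TCP
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        # read the same 1 GiB back out and remember its checksum for later comparison
        tcp_dd --ib=ftln1 --of="$tf" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum "$tf" | cut -f1 '-d ')
    done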
00:41:06.143 [2024-11-27 05:03:13.151462] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82981 ] 00:41:06.143 [2024-11-27 05:03:13.312554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.405 [2024-11-27 05:03:13.466659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.345 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.345 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:41:07.345 05:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:41:07.346 ftln1 00:41:07.346 05:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:41:07.346 05:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82981 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82981 ']' 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82981 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82981 00:41:07.603 killing process with pid 82981 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82981' 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82981 00:41:07.603 05:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82981 00:41:08.977 05:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:41:08.977 05:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:41:08.977 [2024-11-27 05:03:16.146248] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
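As the first fill's spdk_dd instance boots, the mechanics of the tcp_dd helper traced just above become clear: tcp_initiator_setup runs a throwaway SPDK target only long enough to attach the NVMe/TCP-exported namespace as local bdev "ftln1" and capture the resulting bdev configuration, and spdk_dd then replays that JSON on every run. The sketch below is condensed from the common.sh trace; the redirection into ini.json is inferred from the file spdk_dd is later fed, so treat it as an approximation:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    # attach the target's FTL namespace over TCP; this registers bdev "ftln1"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    { echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill "$spdk_ini_pid"   # killprocess in the trace; the config file is all spdk_dd needs
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0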
00:41:08.977 [2024-11-27 05:03:16.146490] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83023 ] 00:41:09.236 [2024-11-27 05:03:16.300212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.236 [2024-11-27 05:03:16.387273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:10.615  [2024-11-27T05:03:18.762Z] Copying: 253/1024 [MB] (253 MBps) [2024-11-27T05:03:20.149Z] Copying: 491/1024 [MB] (238 MBps) [2024-11-27T05:03:20.722Z] Copying: 730/1024 [MB] (239 MBps) [2024-11-27T05:03:20.984Z] Copying: 966/1024 [MB] (236 MBps) [2024-11-27T05:03:21.921Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:41:14.718 00:41:14.718 Calculate MD5 checksum, iteration 1 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:14.718 05:03:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:14.718 [2024-11-27 05:03:21.660467] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:41:14.718 [2024-11-27 05:03:21.660580] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83085 ] 00:41:14.718 [2024-11-27 05:03:21.820228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.977 [2024-11-27 05:03:21.921289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:16.394  [2024-11-27T05:03:24.171Z] Copying: 624/1024 [MB] (624 MBps) [2024-11-27T05:03:24.743Z] Copying: 1024/1024 [MB] (average 620 MBps) 00:41:17.540 00:41:17.540 05:03:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:41:17.540 05:03:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:41:19.496 Fill FTL, iteration 2 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=989ae7097a5cbd0c369e5c8a75320fde 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:19.496 05:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:19.496 [2024-11-27 05:03:26.463553] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:41:19.496 [2024-11-27 05:03:26.463665] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83141 ] 00:41:19.496 [2024-11-27 05:03:26.619721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.758 [2024-11-27 05:03:26.706297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:21.140  [2024-11-27T05:03:29.283Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-27T05:03:30.220Z] Copying: 455/1024 [MB] (230 MBps) [2024-11-27T05:03:31.160Z] Copying: 714/1024 [MB] (259 MBps) [2024-11-27T05:03:31.419Z] Copying: 954/1024 [MB] (240 MBps) [2024-11-27T05:03:31.984Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:41:24.781 00:41:24.781 Calculate MD5 checksum, iteration 2 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:24.781 05:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:24.781 [2024-11-27 05:03:31.969223] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:41:24.781 [2024-11-27 05:03:31.969353] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83199 ]
00:41:25.039 [2024-11-27 05:03:32.123756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:25.039 [2024-11-27 05:03:32.204961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:26.952  [2024-11-27T05:03:34.415Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-27T05:03:35.356Z] Copying: 1024/1024 [MB] (average 636 MBps)
00:41:28.153
00:41:28.153 05:03:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:41:28.153 05:03:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6a5b20f08d7d92f18674dd58bdfee051
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:41:30.695 [2024-11-27 05:03:37.587344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.695 [2024-11-27 05:03:37.587383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:41:30.695 [2024-11-27 05:03:37.587395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:41:30.695 [2024-11-27 05:03:37.587402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.695 [2024-11-27 05:03:37.587420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.695 [2024-11-27 05:03:37.587429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:41:30.695 [2024-11-27 05:03:37.587436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:41:30.695 [2024-11-27 05:03:37.587442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.695 [2024-11-27 05:03:37.587456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.695 [2024-11-27 05:03:37.587463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:41:30.695 [2024-11-27 05:03:37.587469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:41:30.695 [2024-11-27 05:03:37.587475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.695 [2024-11-27 05:03:37.587522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0
00:41:30.695 true
00:41:30.695 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:41:30.695 {
00:41:30.695 "name": "ftl",
00:41:30.695 "properties": [
00:41:30.695 {
00:41:30.695 "name": "superblock_version",
00:41:30.695 "value": 5,
00:41:30.695 "read-only": true
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "name": "base_device",
00:41:30.695 "bands": [
00:41:30.695 {
00:41:30.695 "id": 0,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 1,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 2,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 3,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 4,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 5,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 6,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 7,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 8,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 9,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 10,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 11,
00:41:30.695 "state": "FREE",
00:41:30.695 "validity": 0.0
00:41:30.695 },
00:41:30.695 {
00:41:30.695 "id": 12,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 13,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 14,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 15,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 16,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 17,
00:41:30.696 "state": "FREE",
00:41:30.696 "validity": 0.0
00:41:30.696 }
00:41:30.696 ],
00:41:30.696 "read-only": true
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "name": "cache_device",
00:41:30.696 "type": "bdev",
00:41:30.696 "chunks": [
00:41:30.696 {
00:41:30.696 "id": 0,
00:41:30.696 "state": "INACTIVE",
00:41:30.696 "utilization": 0.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 1,
00:41:30.696 "state": "CLOSED",
00:41:30.696 "utilization": 1.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 2,
00:41:30.696 "state": "CLOSED",
00:41:30.696 "utilization": 1.0
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 3,
00:41:30.696 "state": "OPEN",
00:41:30.696 "utilization": 0.001953125
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "id": 4,
00:41:30.696 "state": "OPEN",
00:41:30.696 "utilization": 0.0
00:41:30.696 }
00:41:30.696 ],
00:41:30.696 "read-only": true
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "name": "verbose_mode",
00:41:30.696 "value": true,
00:41:30.696 "unit": "",
00:41:30.696 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:41:30.696 },
00:41:30.696 {
00:41:30.696 "name": "prep_upgrade_on_shutdown",
00:41:30.696 "value": false,
00:41:30.696 "unit": "",
00:41:30.696 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:41:30.696 }
00:41:30.696 ]
00:41:30.696 }
00:41:30.696 05:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:41:30.954 [2024-11-27 05:03:37.999691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.954 [2024-11-27 05:03:37.999726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:41:30.954 [2024-11-27 05:03:37.999735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:41:30.954 [2024-11-27 05:03:37.999741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.954 [2024-11-27 05:03:37.999757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.954 [2024-11-27 05:03:37.999763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:41:30.954 [2024-11-27 05:03:37.999769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:41:30.954 [2024-11-27 05:03:37.999775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.954 [2024-11-27 05:03:37.999789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:30.954 [2024-11-27 05:03:37.999795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:41:30.954 [2024-11-27 05:03:37.999801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:41:30.954 [2024-11-27 05:03:37.999807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:30.954 [2024-11-27 05:03:37.999850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.149 ms, result 0
00:41:30.954 true
00:41:30.954 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:41:30.954 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:41:30.954 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:41:31.212 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:41:31.212 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
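The used=3 above counts cache chunks that hold any data. Replaying the same jq filter against a trimmed copy of the chunk list from the properties dump shows where the 3 comes from (chunks 1 and 2 are CLOSED at utilization 1.0, chunk 3 is OPEN at 0.001953125), and it is that count the [[ 3 -eq 0 ]] line is testing; the sample input below is abbreviated, not the full RPC output:

    chunks='{"properties": [{"name": "cache_device", "chunks": [
      {"id": 0, "utilization": 0.0},
      {"id": 1, "utilization": 1.0},
      {"id": 2, "utilization": 1.0},
      {"id": 3, "utilization": 0.001953125},
      {"id": 4, "utilization": 0.0}]}]}'
    jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length' <<< "$chunks"   # prints 3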
00:41:31.212 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:41:31.212 [2024-11-27 05:03:38.411997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:31.212 [2024-11-27 05:03:38.412031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:41:31.212 [2024-11-27 05:03:38.412040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:41:31.212 [2024-11-27 05:03:38.412045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:31.212 [2024-11-27 05:03:38.412062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:31.212 [2024-11-27 05:03:38.412081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:41:31.212 [2024-11-27 05:03:38.412087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:41:31.212 [2024-11-27 05:03:38.412093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:31.212 [2024-11-27 05:03:38.412107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:31.212 [2024-11-27 05:03:38.412114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:41:31.212 [2024-11-27 05:03:38.412120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:41:31.212 [2024-11-27 05:03:38.412125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:31.212 [2024-11-27 05:03:38.412167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0
00:41:31.470 true
00:41:31.470 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:41:31.470 {
00:41:31.470 "name": "ftl",
00:41:31.470 "properties": [
00:41:31.470 {
00:41:31.470 "name": "superblock_version",
00:41:31.470 "value": 5,
00:41:31.470 "read-only": true
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "name": "base_device",
00:41:31.470 "bands": [
00:41:31.470 {
00:41:31.470 "id": 0,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 1,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 2,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 3,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 4,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 5,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 6,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 7,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 8,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 9,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 10,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 11,
00:41:31.470 "state": "FREE",
00:41:31.470 "validity": 0.0
00:41:31.470 },
00:41:31.470 {
00:41:31.470 "id": 12,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 13,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 14,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 15,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 16,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 17,
00:41:31.471 "state": "FREE",
00:41:31.471 "validity": 0.0
00:41:31.471 }
00:41:31.471 ],
00:41:31.471 "read-only": true
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "name": "cache_device",
00:41:31.471 "type": "bdev",
00:41:31.471 "chunks": [
00:41:31.471 {
00:41:31.471 "id": 0,
00:41:31.471 "state": "INACTIVE",
00:41:31.471 "utilization": 0.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 1,
00:41:31.471 "state": "CLOSED",
00:41:31.471 "utilization": 1.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 2,
00:41:31.471 "state": "CLOSED",
00:41:31.471 "utilization": 1.0
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 3,
00:41:31.471 "state": "OPEN",
00:41:31.471 "utilization": 0.001953125
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "id": 4,
00:41:31.471 "state": "OPEN",
00:41:31.471 "utilization": 0.0
00:41:31.471 }
00:41:31.471 ],
00:41:31.471 "read-only": true
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "name": "verbose_mode",
00:41:31.471 "value": true,
00:41:31.471 "unit": "",
00:41:31.471 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:41:31.471 },
00:41:31.471 {
00:41:31.471 "name": "prep_upgrade_on_shutdown",
00:41:31.471 "value": true,
00:41:31.471 "unit": "",
00:41:31.471 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:41:31.471 }
00:41:31.471 ]
00:41:31.471 }
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82858 ]]
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82858
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82858 ']'
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82858
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:31.471 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82858
00:41:31.729 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:31.729 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:31.729 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82858'
00:41:31.729 killing process with pid 82858
00:41:31.729 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82858
00:41:31.729 05:03:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82858
00:41:32.298 [2024-11-27 05:03:39.199275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:41:32.298 [2024-11-27 05:03:39.209399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:32.299 [2024-11-27 05:03:39.209432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:41:32.299 [2024-11-27 05:03:39.209442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:41:32.299 [2024-11-27 05:03:39.209448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:32.299 [2024-11-27 05:03:39.209465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:41:32.299 [2024-11-27 05:03:39.211510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:32.299 [2024-11-27 05:03:39.211534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:41:32.299 [2024-11-27 05:03:39.211542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.034 ms
00:41:32.299 [2024-11-27 05:03:39.211549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:40.427 [2024-11-27 05:03:47.055569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:41:40.427 [2024-11-27 05:03:47.055767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:41:40.427 [2024-11-27 05:03:47.055789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7843.967 ms
00:41:40.427 [2024-11-27 05:03:47.055795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:41:40.427 [2024-11-27 05:03:47.056718] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.056732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:41:40.427 [2024-11-27 05:03:47.056740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.909 ms 00:41:40.427 [2024-11-27 05:03:47.056746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.057629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.057643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:41:40.427 [2024-11-27 05:03:47.057650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:41:40.427 [2024-11-27 05:03:47.057659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.065040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.065078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:41:40.427 [2024-11-27 05:03:47.065086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms 00:41:40.427 [2024-11-27 05:03:47.065092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.070777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.070882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:41:40.427 [2024-11-27 05:03:47.070895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.660 ms 00:41:40.427 [2024-11-27 05:03:47.070901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.070955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.070966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:41:40.427 [2024-11-27 05:03:47.070973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:41:40.427 [2024-11-27 05:03:47.070979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.077725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.077817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:41:40.427 [2024-11-27 05:03:47.077829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.735 ms 00:41:40.427 [2024-11-27 05:03:47.077834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.427 [2024-11-27 05:03:47.085163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.427 [2024-11-27 05:03:47.085189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:41:40.428 [2024-11-27 05:03:47.085196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.305 ms 00:41:40.428 [2024-11-27 05:03:47.085201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.092183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.428 [2024-11-27 05:03:47.092273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:41:40.428 [2024-11-27 05:03:47.092284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.958 ms 00:41:40.428 [2024-11-27 05:03:47.092289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.099167] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.428 [2024-11-27 05:03:47.099257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:41:40.428 [2024-11-27 05:03:47.099268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.834 ms 00:41:40.428 [2024-11-27 05:03:47.099273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.099295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:41:40.428 [2024-11-27 05:03:47.099312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:41:40.428 [2024-11-27 05:03:47.099320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:41:40.428 [2024-11-27 05:03:47.099326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:41:40.428 [2024-11-27 05:03:47.099332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:40.428 [2024-11-27 05:03:47.099420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:41:40.428 [2024-11-27 05:03:47.099426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a2acda3b-d569-4b5e-82be-d717fca2bca3 00:41:40.428 [2024-11-27 05:03:47.099432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:41:40.428 [2024-11-27 05:03:47.099437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:41:40.428 [2024-11-27 05:03:47.099443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:41:40.428 [2024-11-27 05:03:47.099448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:41:40.428 [2024-11-27 05:03:47.099456] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:41:40.428 [2024-11-27 05:03:47.099461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:41:40.428 [2024-11-27 05:03:47.099469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:41:40.428 [2024-11-27 05:03:47.099474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:41:40.428 [2024-11-27 05:03:47.099479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:41:40.428 [2024-11-27 05:03:47.099485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.428 [2024-11-27 05:03:47.099491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:41:40.428 [2024-11-27 05:03:47.099499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:41:40.428 [2024-11-27 05:03:47.099505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.109058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.428 [2024-11-27 05:03:47.109088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:41:40.428 [2024-11-27 05:03:47.109100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.541 ms 00:41:40.428 [2024-11-27 05:03:47.109106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.109374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:40.428 [2024-11-27 05:03:47.109385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:41:40.428 [2024-11-27 05:03:47.109392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:41:40.428 [2024-11-27 05:03:47.109397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.142143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.142248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:40.428 [2024-11-27 05:03:47.142259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.142265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.142287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.142294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:40.428 [2024-11-27 05:03:47.142300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.142306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.142353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.142360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:40.428 [2024-11-27 05:03:47.142370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.142376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.142388] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.142394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:40.428 [2024-11-27 05:03:47.142400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.142406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.200947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.200982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:40.428 [2024-11-27 05:03:47.200995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.201001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.248758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.248891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:40.428 [2024-11-27 05:03:47.248903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.248910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.248977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.248985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:40.428 [2024-11-27 05:03:47.248991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.249034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.249041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:40.428 [2024-11-27 05:03:47.249047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.249144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.249152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:40.428 [2024-11-27 05:03:47.249158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.249192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.249200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:41:40.428 [2024-11-27 05:03:47.249206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.249238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.249245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:40.428 [2024-11-27 05:03:47.249252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 
[2024-11-27 05:03:47.249292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:40.428 [2024-11-27 05:03:47.249300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:40.428 [2024-11-27 05:03:47.249306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:40.428 [2024-11-27 05:03:47.249311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:40.428 [2024-11-27 05:03:47.249412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8039.955 ms, result 0 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83386 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:41:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83386 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83386 ']' 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:45.716 05:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:45.717 [2024-11-27 05:03:52.227669] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
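Each FTL management step in this log is traced as a quadruplet: Action (or Rollback, its teardown counterpart during shutdown), the step name, the step duration, and status 0 on success. The stats dump inside the shutdown sequence above also reports total writes 786752 against user writes 524288, which is where the WAF figure comes from: write amplification is simply total media writes divided by host writes. A quick check of the arithmetic (plain bc, not part of the test):

    # WAF = total writes / user writes, both counted in blocks
    echo "scale=4; 786752 / 524288" | bc    # prints 1.5006, matching the log

The extra ~0.5 on top of the user data is presumably the FTL's own metadata and band-relocation writes.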
00:41:45.717 [2024-11-27 05:03:52.227796] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83386 ] 00:41:45.717 [2024-11-27 05:03:52.387420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.717 [2024-11-27 05:03:52.474043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.977 [2024-11-27 05:03:53.044462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:45.977 [2024-11-27 05:03:53.044516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:46.240 [2024-11-27 05:03:53.192007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.240 [2024-11-27 05:03:53.192059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:41:46.240 [2024-11-27 05:03:53.192089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:46.240 [2024-11-27 05:03:53.192098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.240 [2024-11-27 05:03:53.192160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.240 [2024-11-27 05:03:53.192171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:46.240 [2024-11-27 05:03:53.192179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:41:46.240 [2024-11-27 05:03:53.192186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.240 [2024-11-27 05:03:53.192208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:41:46.240 [2024-11-27 05:03:53.192888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:41:46.240 [2024-11-27 05:03:53.192911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.240 [2024-11-27 05:03:53.192918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:46.240 [2024-11-27 05:03:53.192927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.707 ms 00:41:46.240 [2024-11-27 05:03:53.192933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.240 [2024-11-27 05:03:53.194196] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:41:46.240 [2024-11-27 05:03:53.207454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.240 [2024-11-27 05:03:53.207497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:41:46.240 [2024-11-27 05:03:53.207509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.260 ms 00:41:46.240 [2024-11-27 05:03:53.207517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.240 [2024-11-27 05:03:53.207579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.207588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:41:46.241 [2024-11-27 05:03:53.207597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:41:46.241 [2024-11-27 05:03:53.207604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.213452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 
05:03:53.213487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:46.241 [2024-11-27 05:03:53.213497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.772 ms 00:41:46.241 [2024-11-27 05:03:53.213505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.213561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.213571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:46.241 [2024-11-27 05:03:53.213579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:41:46.241 [2024-11-27 05:03:53.213586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.213626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.213638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:41:46.241 [2024-11-27 05:03:53.213647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:41:46.241 [2024-11-27 05:03:53.213654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.213674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:41:46.241 [2024-11-27 05:03:53.217086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.217120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:46.241 [2024-11-27 05:03:53.217133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.413 ms 00:41:46.241 [2024-11-27 05:03:53.217141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.217170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.217178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:41:46.241 [2024-11-27 05:03:53.217186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:41:46.241 [2024-11-27 05:03:53.217193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.217213] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:41:46.241 [2024-11-27 05:03:53.217234] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:41:46.241 [2024-11-27 05:03:53.217268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:41:46.241 [2024-11-27 05:03:53.217283] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:41:46.241 [2024-11-27 05:03:53.217396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:41:46.241 [2024-11-27 05:03:53.217406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:41:46.241 [2024-11-27 05:03:53.217416] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:41:46.241 [2024-11-27 05:03:53.217427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217447] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:41:46.241 [2024-11-27 05:03:53.217453] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:41:46.241 [2024-11-27 05:03:53.217461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:41:46.241 [2024-11-27 05:03:53.217468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:41:46.241 [2024-11-27 05:03:53.217476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.217483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:41:46.241 [2024-11-27 05:03:53.217490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:41:46.241 [2024-11-27 05:03:53.217497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.217582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.241 [2024-11-27 05:03:53.217590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:41:46.241 [2024-11-27 05:03:53.217599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:41:46.241 [2024-11-27 05:03:53.217606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.241 [2024-11-27 05:03:53.217740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:41:46.241 [2024-11-27 05:03:53.217751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:41:46.241 [2024-11-27 05:03:53.217759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:41:46.241 [2024-11-27 05:03:53.217780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:41:46.241 [2024-11-27 05:03:53.217795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:41:46.241 [2024-11-27 05:03:53.217802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:41:46.241 [2024-11-27 05:03:53.217809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:41:46.241 [2024-11-27 05:03:53.217821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:41:46.241 [2024-11-27 05:03:53.217828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:41:46.241 [2024-11-27 05:03:53.217843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:41:46.241 [2024-11-27 05:03:53.217849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:41:46.241 [2024-11-27 05:03:53.217862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:41:46.241 [2024-11-27 05:03:53.217868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217875] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:41:46.241 [2024-11-27 05:03:53.217881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:41:46.241 [2024-11-27 05:03:53.217887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:41:46.241 [2024-11-27 05:03:53.217907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:41:46.241 [2024-11-27 05:03:53.217913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:41:46.241 [2024-11-27 05:03:53.217926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:41:46.241 [2024-11-27 05:03:53.217932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:41:46.241 [2024-11-27 05:03:53.217945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:41:46.241 [2024-11-27 05:03:53.217951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:41:46.241 [2024-11-27 05:03:53.217963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:41:46.241 [2024-11-27 05:03:53.217970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:41:46.241 [2024-11-27 05:03:53.217983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:41:46.241 [2024-11-27 05:03:53.217989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.241 [2024-11-27 05:03:53.217996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:41:46.242 [2024-11-27 05:03:53.218002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:41:46.242 [2024-11-27 05:03:53.218008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.242 [2024-11-27 05:03:53.218015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:41:46.242 [2024-11-27 05:03:53.218021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:41:46.242 [2024-11-27 05:03:53.218027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.242 [2024-11-27 05:03:53.218033] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:41:46.242 [2024-11-27 05:03:53.218041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:41:46.242 [2024-11-27 05:03:53.218053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:46.242 [2024-11-27 05:03:53.218081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:46.242 [2024-11-27 05:03:53.218089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:41:46.242 [2024-11-27 05:03:53.218097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:41:46.242 [2024-11-27 05:03:53.218103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:41:46.242 [2024-11-27 05:03:53.218110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:41:46.242 [2024-11-27 05:03:53.218117] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:41:46.242 [2024-11-27 05:03:53.218123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:41:46.242 [2024-11-27 05:03:53.218131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:41:46.242 [2024-11-27 05:03:53.218140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:41:46.242 [2024-11-27 05:03:53.218156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:41:46.242 [2024-11-27 05:03:53.218178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:41:46.242 [2024-11-27 05:03:53.218185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:41:46.242 [2024-11-27 05:03:53.218192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:41:46.242 [2024-11-27 05:03:53.218205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:41:46.242 [2024-11-27 05:03:53.218253] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:41:46.242 [2024-11-27 05:03:53.218261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:46.242 [2024-11-27 05:03:53.218275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:41:46.242 [2024-11-27 05:03:53.218282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:41:46.242 [2024-11-27 05:03:53.218289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:41:46.242 [2024-11-27 05:03:53.218296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:46.242 [2024-11-27 05:03:53.218302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:41:46.242 [2024-11-27 05:03:53.218313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.624 ms 00:41:46.242 [2024-11-27 05:03:53.218320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:46.242 [2024-11-27 05:03:53.218361] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:41:46.242 [2024-11-27 05:03:53.218373] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:41:49.545 [2024-11-27 05:03:56.628947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.629008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:41:49.545 [2024-11-27 05:03:56.629026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3410.570 ms 00:41:49.545 [2024-11-27 05:03:56.629036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.657822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.657876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:49.545 [2024-11-27 05:03:56.657888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.538 ms 00:41:49.545 [2024-11-27 05:03:56.657897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.657992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.658003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:41:49.545 [2024-11-27 05:03:56.658013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:41:49.545 [2024-11-27 05:03:56.658021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.691202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.691264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:49.545 [2024-11-27 05:03:56.691282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.142 ms 00:41:49.545 [2024-11-27 05:03:56.691290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.691324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.691333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:49.545 [2024-11-27 05:03:56.691342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:49.545 [2024-11-27 05:03:56.691350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.691874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.691896] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:49.545 [2024-11-27 05:03:56.691906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:41:49.545 [2024-11-27 05:03:56.691920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.691965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.691974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:49.545 [2024-11-27 05:03:56.691983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:41:49.545 [2024-11-27 05:03:56.691990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.709054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.709118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:49.545 [2024-11-27 05:03:56.709130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.038 ms 00:41:49.545 [2024-11-27 05:03:56.709139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.545 [2024-11-27 05:03:56.736112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:41:49.545 [2024-11-27 05:03:56.736178] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:41:49.545 [2024-11-27 05:03:56.736194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.545 [2024-11-27 05:03:56.736204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:41:49.545 [2024-11-27 05:03:56.736214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.894 ms 00:41:49.545 [2024-11-27 05:03:56.736223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.751940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.806 [2024-11-27 05:03:56.752177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:41:49.806 [2024-11-27 05:03:56.752203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.649 ms 00:41:49.806 [2024-11-27 05:03:56.752214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.765732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.806 [2024-11-27 05:03:56.765787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:41:49.806 [2024-11-27 05:03:56.765799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.460 ms 00:41:49.806 [2024-11-27 05:03:56.765807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.778978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.806 [2024-11-27 05:03:56.779030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:41:49.806 [2024-11-27 05:03:56.779042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.111 ms 00:41:49.806 [2024-11-27 05:03:56.779051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.779766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.806 [2024-11-27 05:03:56.779805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:41:49.806 [2024-11-27 
05:03:56.779818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:41:49.806 [2024-11-27 05:03:56.779826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.849590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.806 [2024-11-27 05:03:56.849678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:41:49.806 [2024-11-27 05:03:56.849702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.737 ms 00:41:49.806 [2024-11-27 05:03:56.849716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.806 [2024-11-27 05:03:56.861674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:41:49.807 [2024-11-27 05:03:56.862950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.863003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:41:49.807 [2024-11-27 05:03:56.863016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.162 ms 00:41:49.807 [2024-11-27 05:03:56.863025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.863161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.863179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:41:49.807 [2024-11-27 05:03:56.863189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:41:49.807 [2024-11-27 05:03:56.863198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.863262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.863274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:41:49.807 [2024-11-27 05:03:56.863283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:41:49.807 [2024-11-27 05:03:56.863291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.863317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.863327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:41:49.807 [2024-11-27 05:03:56.863339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:41:49.807 [2024-11-27 05:03:56.863347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.863380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:41:49.807 [2024-11-27 05:03:56.863391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.863399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:41:49.807 [2024-11-27 05:03:56.863407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:41:49.807 [2024-11-27 05:03:56.863415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.890565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.890628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:41:49.807 [2024-11-27 05:03:56.890641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.125 ms 00:41:49.807 [2024-11-27 05:03:56.890650] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.890751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:49.807 [2024-11-27 05:03:56.890761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:41:49.807 [2024-11-27 05:03:56.890772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:41:49.807 [2024-11-27 05:03:56.890780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:49.807 [2024-11-27 05:03:56.892125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3699.568 ms, result 0 00:41:49.807 [2024-11-27 05:03:56.907011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.807 [2024-11-27 05:03:56.923027] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:41:49.807 [2024-11-27 05:03:56.931307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:49.807 05:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:49.807 05:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:41:49.807 05:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:49.807 05:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:41:49.807 05:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:50.066 [2024-11-27 05:03:57.175324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:50.067 [2024-11-27 05:03:57.175386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:50.067 [2024-11-27 05:03:57.175407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:41:50.067 [2024-11-27 05:03:57.175416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:50.067 [2024-11-27 05:03:57.175441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:50.067 [2024-11-27 05:03:57.175450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:50.067 [2024-11-27 05:03:57.175460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:50.067 [2024-11-27 05:03:57.175467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:50.067 [2024-11-27 05:03:57.175489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:50.067 [2024-11-27 05:03:57.175498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:50.067 [2024-11-27 05:03:57.175506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:41:50.067 [2024-11-27 05:03:57.175514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:50.067 [2024-11-27 05:03:57.175580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.247 ms, result 0 00:41:50.067 true 00:41:50.067 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:50.327 { 00:41:50.327 "name": "ftl", 00:41:50.327 "properties": [ 00:41:50.327 { 00:41:50.327 "name": "superblock_version", 00:41:50.327 "value": 5, 00:41:50.327 "read-only": true 00:41:50.327 }, 
00:41:50.327 { 00:41:50.327 "name": "base_device", 00:41:50.327 "bands": [ 00:41:50.327 { 00:41:50.327 "id": 0, 00:41:50.327 "state": "CLOSED", 00:41:50.327 "validity": 1.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 1, 00:41:50.327 "state": "CLOSED", 00:41:50.327 "validity": 1.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 2, 00:41:50.327 "state": "CLOSED", 00:41:50.327 "validity": 0.007843137254901933 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 3, 00:41:50.327 "state": "FREE", 00:41:50.327 "validity": 0.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 4, 00:41:50.327 "state": "FREE", 00:41:50.327 "validity": 0.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 5, 00:41:50.327 "state": "FREE", 00:41:50.327 "validity": 0.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.327 "id": 6, 00:41:50.327 "state": "FREE", 00:41:50.327 "validity": 0.0 00:41:50.327 }, 00:41:50.327 { 00:41:50.328 "id": 7, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 8, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 9, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 10, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 11, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 12, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 13, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 14, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 15, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 16, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 17, 00:41:50.328 "state": "FREE", 00:41:50.328 "validity": 0.0 00:41:50.328 } 00:41:50.328 ], 00:41:50.328 "read-only": true 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "name": "cache_device", 00:41:50.328 "type": "bdev", 00:41:50.328 "chunks": [ 00:41:50.328 { 00:41:50.328 "id": 0, 00:41:50.328 "state": "INACTIVE", 00:41:50.328 "utilization": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 1, 00:41:50.328 "state": "OPEN", 00:41:50.328 "utilization": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 2, 00:41:50.328 "state": "OPEN", 00:41:50.328 "utilization": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 3, 00:41:50.328 "state": "FREE", 00:41:50.328 "utilization": 0.0 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "id": 4, 00:41:50.328 "state": "FREE", 00:41:50.328 "utilization": 0.0 00:41:50.328 } 00:41:50.328 ], 00:41:50.328 "read-only": true 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "name": "verbose_mode", 00:41:50.328 "value": true, 00:41:50.328 "unit": "", 00:41:50.328 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:50.328 }, 00:41:50.328 { 00:41:50.328 "name": "prep_upgrade_on_shutdown", 00:41:50.328 "value": false, 00:41:50.328 "unit": "", 00:41:50.328 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:50.328 } 00:41:50.328 ] 00:41:50.328 } 00:41:50.328 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:41:50.328 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:41:50.328 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:50.590 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:41:50.590 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:41:50.590 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:41:50.590 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:41:50.590 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:50.851 Validate MD5 checksum, iteration 1 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:50.851 05:03:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:50.851 [2024-11-27 05:03:57.872131] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
00:41:50.851 [2024-11-27 05:03:57.872565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83461 ] 00:41:50.851 [2024-11-27 05:03:58.044288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.112 [2024-11-27 05:03:58.184800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:53.027  [2024-11-27T05:04:00.812Z] Copying: 525/1024 [MB] (525 MBps) [2024-11-27T05:04:01.753Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:41:54.550 00:41:54.550 05:04:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:41:54.550 05:04:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=989ae7097a5cbd0c369e5c8a75320fde 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 989ae7097a5cbd0c369e5c8a75320fde != \9\8\9\a\e\7\0\9\7\a\5\c\b\d\0\c\3\6\9\e\5\c\8\a\7\5\3\2\0\f\d\e ]] 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:57.168 Validate MD5 checksum, iteration 2 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:57.168 05:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:57.168 [2024-11-27 05:04:03.896865] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 
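Each validation pass reads 1024 blocks of 1048576 bytes (1 GiB) from ftln1 through spdk_dd acting as an NVMe/TCP initiator, advancing --skip by 1024 blocks per iteration, and md5-compares the result against a sum presumably recorded when the test data was originally written; the backslash-escaped right-hand side in the [[ ... != ... ]] xtrace line is just bash re-quoting the expected sum so it is matched literally rather than as a glob. A minimal sketch of the loop shape, with the sums array and file path assumed rather than taken from the real upgrade_shutdown.sh:

    test_validate_checksum() {    # sketch only, assumed variable names
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            [[ $sum == "${sums[i]}" ]] || return 1    # mismatch fails the test
            skip=$((skip + 1024))
        done
    }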
00:41:57.168 [2024-11-27 05:04:03.897477] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83528 ] 00:41:57.168 [2024-11-27 05:04:04.056843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.168 [2024-11-27 05:04:04.161016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:58.550  [2024-11-27T05:04:06.695Z] Copying: 536/1024 [MB] (536 MBps) [2024-11-27T05:04:07.266Z] Copying: 1024/1024 [MB] (average 590 MBps) 00:42:00.063 00:42:00.324 05:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:42:00.324 05:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:02.865 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:02.865 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6a5b20f08d7d92f18674dd58bdfee051 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6a5b20f08d7d92f18674dd58bdfee051 != \6\a\5\b\2\0\f\0\8\d\7\d\9\2\f\1\8\6\7\4\d\d\5\8\b\d\f\e\e\0\5\1 ]] 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83386 ]] 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83386 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83589 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83589 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83589 ']' 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:02.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
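The kill -9 traced above is the crux of the test: earlier in the log an instance shut down cleanly through the 'FTL shutdown' management process (persisting L2P, band and trim metadata and setting the clean state), but pid 83386, which logged 'Set FTL dirty state' during its own startup, is destroyed with no FTL teardown at all, so the replacement target launched here must come up from a dirty superblock and rebuild its state. Going by the ftl/common.sh line numbers replayed in the xtrace (137-139), the helper is essentially:

    tcp_target_shutdown_dirty() {
        # skip any RPC-driven shutdown and SIGKILL the target outright
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

The 'line 834: 83386 Killed' message just below is bash reporting the SIGKILLed child once it reaps it.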
00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:02.866 05:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:02.866 [2024-11-27 05:04:09.596475] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:42:02.866 [2024-11-27 05:04:09.596964] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83589 ] 00:42:02.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83386 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:42:02.866 [2024-11-27 05:04:09.758206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.866 [2024-11-27 05:04:09.844794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.433 [2024-11-27 05:04:10.418610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:03.433 [2024-11-27 05:04:10.418666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:03.433 [2024-11-27 05:04:10.561532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.433 [2024-11-27 05:04:10.561562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:42:03.433 [2024-11-27 05:04:10.561573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:42:03.433 [2024-11-27 05:04:10.561579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.433 [2024-11-27 05:04:10.561621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.433 [2024-11-27 05:04:10.561629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:03.433 [2024-11-27 05:04:10.561636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:42:03.433 [2024-11-27 05:04:10.561641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.433 [2024-11-27 05:04:10.561655] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:42:03.433 [2024-11-27 05:04:10.562203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:42:03.433 [2024-11-27 05:04:10.562216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.433 [2024-11-27 05:04:10.562222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:03.433 [2024-11-27 05:04:10.562228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:42:03.433 [2024-11-27 05:04:10.562234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.433 [2024-11-27 05:04:10.562482] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:42:03.433 [2024-11-27 05:04:10.574645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.433 [2024-11-27 05:04:10.574671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:42:03.433 [2024-11-27 05:04:10.574680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.164 ms 00:42:03.433 [2024-11-27 05:04:10.574687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.433 [2024-11-27 05:04:10.581285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:42:03.433 [2024-11-27 05:04:10.581310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:42:03.433 [2024-11-27 05:04:10.581318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:42:03.433 [2024-11-27 05:04:10.581323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.433 [2024-11-27 05:04:10.581571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.433 [2024-11-27 05:04:10.581580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:03.434 [2024-11-27 05:04:10.581586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:42:03.434 [2024-11-27 05:04:10.581592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.581634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.581641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:03.434 [2024-11-27 05:04:10.581647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:42:03.434 [2024-11-27 05:04:10.581653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.581670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.581676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:42:03.434 [2024-11-27 05:04:10.581682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:03.434 [2024-11-27 05:04:10.581688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.581702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:42:03.434 [2024-11-27 05:04:10.583959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.583980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:03.434 [2024-11-27 05:04:10.583987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.260 ms 00:42:03.434 [2024-11-27 05:04:10.583994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.584012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.584018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:42:03.434 [2024-11-27 05:04:10.584024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:42:03.434 [2024-11-27 05:04:10.584029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.584045] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:42:03.434 [2024-11-27 05:04:10.584059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:42:03.434 [2024-11-27 05:04:10.584093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:42:03.434 [2024-11-27 05:04:10.584106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:42:03.434 [2024-11-27 05:04:10.584185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:42:03.434 [2024-11-27 05:04:10.584192] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:42:03.434 [2024-11-27 05:04:10.584201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:42:03.434 [2024-11-27 05:04:10.584208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584214] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584220] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:42:03.434 [2024-11-27 05:04:10.584225] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:42:03.434 [2024-11-27 05:04:10.584230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:42:03.434 [2024-11-27 05:04:10.584236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:42:03.434 [2024-11-27 05:04:10.584244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.584250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:42:03.434 [2024-11-27 05:04:10.584255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:42:03.434 [2024-11-27 05:04:10.584260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.584325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.434 [2024-11-27 05:04:10.584336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:42:03.434 [2024-11-27 05:04:10.584342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:42:03.434 [2024-11-27 05:04:10.584347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.434 [2024-11-27 05:04:10.584421] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:42:03.434 [2024-11-27 05:04:10.584430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:42:03.434 [2024-11-27 05:04:10.584436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:42:03.434 [2024-11-27 05:04:10.584453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:42:03.434 [2024-11-27 05:04:10.584466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:42:03.434 [2024-11-27 05:04:10.584472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:42:03.434 [2024-11-27 05:04:10.584476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:42:03.434 [2024-11-27 05:04:10.584487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:42:03.434 [2024-11-27 05:04:10.584491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:42:03.434 [2024-11-27 05:04:10.584502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:42:03.434 [2024-11-27 05:04:10.584507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:42:03.434 [2024-11-27 05:04:10.584517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:42:03.434 [2024-11-27 05:04:10.584522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:42:03.434 [2024-11-27 05:04:10.584532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:42:03.434 [2024-11-27 05:04:10.584541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:42:03.434 [2024-11-27 05:04:10.584552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:42:03.434 [2024-11-27 05:04:10.584556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:42:03.434 [2024-11-27 05:04:10.584566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:42:03.434 [2024-11-27 05:04:10.584571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:42:03.434 [2024-11-27 05:04:10.584582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:42:03.434 [2024-11-27 05:04:10.584586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:42:03.434 [2024-11-27 05:04:10.584596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:42:03.434 [2024-11-27 05:04:10.584601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:42:03.434 [2024-11-27 05:04:10.584612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:42:03.434 [2024-11-27 05:04:10.584616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:42:03.434 [2024-11-27 05:04:10.584627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:42:03.434 [2024-11-27 05:04:10.584632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.435 [2024-11-27 05:04:10.584637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:42:03.435 [2024-11-27 05:04:10.584642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:42:03.435 [2024-11-27 05:04:10.584647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:03.435 [2024-11-27 05:04:10.584652] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:42:03.435 [2024-11-27 05:04:10.584658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:42:03.435 [2024-11-27 05:04:10.584664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:03.435 [2024-11-27 05:04:10.584669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:42:03.435 [2024-11-27 05:04:10.584676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:42:03.435 [2024-11-27 05:04:10.584681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:42:03.435 [2024-11-27 05:04:10.584686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:42:03.435 [2024-11-27 05:04:10.584691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:42:03.435 [2024-11-27 05:04:10.584696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:42:03.435 [2024-11-27 05:04:10.584701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:42:03.435 [2024-11-27 05:04:10.584707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:42:03.435 [2024-11-27 05:04:10.584714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:42:03.435 [2024-11-27 05:04:10.584725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:42:03.435 [2024-11-27 05:04:10.584741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:42:03.435 [2024-11-27 05:04:10.584746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:42:03.435 [2024-11-27 05:04:10.584751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:42:03.435 [2024-11-27 05:04:10.584757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:42:03.435 [2024-11-27 05:04:10.584793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:42:03.435 [2024-11-27 05:04:10.584800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:03.435 [2024-11-27 05:04:10.584813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:42:03.435 [2024-11-27 05:04:10.584819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:42:03.435 [2024-11-27 05:04:10.584824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:42:03.435 [2024-11-27 05:04:10.584829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.584835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:42:03.435 [2024-11-27 05:04:10.584841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:42:03.435 [2024-11-27 05:04:10.584846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.603760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.603783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:03.435 [2024-11-27 05:04:10.603792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.878 ms 00:42:03.435 [2024-11-27 05:04:10.603798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.603824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.603831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:42:03.435 [2024-11-27 05:04:10.603837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:42:03.435 [2024-11-27 05:04:10.603843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.627609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.627641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:03.435 [2024-11-27 05:04:10.627649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.727 ms 00:42:03.435 [2024-11-27 05:04:10.627655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.627674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.627681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:03.435 [2024-11-27 05:04:10.627687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:42:03.435 [2024-11-27 05:04:10.627694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.627764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.627772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:03.435 [2024-11-27 05:04:10.627778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:42:03.435 [2024-11-27 05:04:10.627784] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:42:03.435 [2024-11-27 05:04:10.627812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.435 [2024-11-27 05:04:10.627818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:03.435 [2024-11-27 05:04:10.627824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:42:03.435 [2024-11-27 05:04:10.627829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.639358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.639381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:03.694 [2024-11-27 05:04:10.639388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.509 ms 00:42:03.694 [2024-11-27 05:04:10.639396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.639468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.639476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:42:03.694 [2024-11-27 05:04:10.639482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:42:03.694 [2024-11-27 05:04:10.639488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.663657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.663695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:42:03.694 [2024-11-27 05:04:10.663707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.155 ms 00:42:03.694 [2024-11-27 05:04:10.663715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.672099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.672125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:42:03.694 [2024-11-27 05:04:10.672133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:42:03.694 [2024-11-27 05:04:10.672139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.715336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.715377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:42:03.694 [2024-11-27 05:04:10.715386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.156 ms 00:42:03.694 [2024-11-27 05:04:10.715393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.715498] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:42:03.694 [2024-11-27 05:04:10.715574] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:42:03.694 [2024-11-27 05:04:10.715649] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:42:03.694 [2024-11-27 05:04:10.715722] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:42:03.694 [2024-11-27 05:04:10.715729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.715735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:42:03.694 [2024-11-27 
05:04:10.715742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:42:03.694 [2024-11-27 05:04:10.715747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.715791] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:42:03.694 [2024-11-27 05:04:10.715799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.715808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:42:03.694 [2024-11-27 05:04:10.715814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:42:03.694 [2024-11-27 05:04:10.715820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.727881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.727909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:42:03.694 [2024-11-27 05:04:10.727917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.045 ms 00:42:03.694 [2024-11-27 05:04:10.727923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.734329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.734361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:42:03.694 [2024-11-27 05:04:10.734370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:42:03.694 [2024-11-27 05:04:10.734376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:03.694 [2024-11-27 05:04:10.734440] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:42:03.694 [2024-11-27 05:04:10.734557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:03.694 [2024-11-27 05:04:10.734571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:42:03.694 [2024-11-27 05:04:10.734578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms 00:42:03.694 [2024-11-27 05:04:10.734583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.264 [2024-11-27 05:04:11.236291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.264 [2024-11-27 05:04:11.236370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:04.264 [2024-11-27 05:04:11.236388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 501.050 ms 00:42:04.264 [2024-11-27 05:04:11.236398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.264 [2024-11-27 05:04:11.241000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.264 [2024-11-27 05:04:11.241036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:04.264 [2024-11-27 05:04:11.241047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.441 ms 00:42:04.264 [2024-11-27 05:04:11.241076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.264 [2024-11-27 05:04:11.241728] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:42:04.264 [2024-11-27 05:04:11.241764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.264 [2024-11-27 05:04:11.241774] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:04.264 [2024-11-27 05:04:11.241785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.656 ms 00:42:04.264 [2024-11-27 05:04:11.241794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.264 [2024-11-27 05:04:11.241828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.264 [2024-11-27 05:04:11.241839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:04.264 [2024-11-27 05:04:11.241849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:04.264 [2024-11-27 05:04:11.241863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.264 [2024-11-27 05:04:11.241899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 507.458 ms, result 0 00:42:04.264 [2024-11-27 05:04:11.241940] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:42:04.264 [2024-11-27 05:04:11.242031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.264 [2024-11-27 05:04:11.242043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:42:04.264 [2024-11-27 05:04:11.242052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:42:04.264 [2024-11-27 05:04:11.242060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.833 [2024-11-27 05:04:11.833169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.833 [2024-11-27 05:04:11.833231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:04.833 [2024-11-27 05:04:11.833258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 589.986 ms 00:42:04.833 [2024-11-27 05:04:11.833266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.833 [2024-11-27 05:04:11.837758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.837789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:04.834 [2024-11-27 05:04:11.837798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.515 ms 00:42:04.834 [2024-11-27 05:04:11.837806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.838251] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:42:04.834 [2024-11-27 05:04:11.838276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.838284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:04.834 [2024-11-27 05:04:11.838293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.444 ms 00:42:04.834 [2024-11-27 05:04:11.838301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.838347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.838356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:04.834 [2024-11-27 05:04:11.838364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:42:04.834 [2024-11-27 05:04:11.838371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 
05:04:11.838406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 596.461 ms, result 0 00:42:04.834 [2024-11-27 05:04:11.838447] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:04.834 [2024-11-27 05:04:11.838462] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:42:04.834 [2024-11-27 05:04:11.838472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.838480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:42:04.834 [2024-11-27 05:04:11.838488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1104.048 ms 00:42:04.834 [2024-11-27 05:04:11.838495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.838525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.838536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:42:04.834 [2024-11-27 05:04:11.838544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:42:04.834 [2024-11-27 05:04:11.838553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.849756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:42:04.834 [2024-11-27 05:04:11.849857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.849866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:42:04.834 [2024-11-27 05:04:11.849875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.288 ms 00:42:04.834 [2024-11-27 05:04:11.849883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.850575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.850597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:42:04.834 [2024-11-27 05:04:11.850606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:42:04.834 [2024-11-27 05:04:11.850614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.852859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.852878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:42:04.834 [2024-11-27 05:04:11.852887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.230 ms 00:42:04.834 [2024-11-27 05:04:11.852895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.852933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.852941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:42:04.834 [2024-11-27 05:04:11.852952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:42:04.834 [2024-11-27 05:04:11.852959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.853062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.853082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:42:04.834 
[2024-11-27 05:04:11.853090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:42:04.834 [2024-11-27 05:04:11.853097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.853118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.853126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:42:04.834 [2024-11-27 05:04:11.853134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:04.834 [2024-11-27 05:04:11.853142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.853172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:42:04.834 [2024-11-27 05:04:11.853182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.853189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:42:04.834 [2024-11-27 05:04:11.853197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:42:04.834 [2024-11-27 05:04:11.853205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.853256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:04.834 [2024-11-27 05:04:11.853265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:42:04.834 [2024-11-27 05:04:11.853273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:42:04.834 [2024-11-27 05:04:11.853280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:04.834 [2024-11-27 05:04:11.854172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1292.188 ms, result 0 00:42:04.834 [2024-11-27 05:04:11.866561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:04.834 [2024-11-27 05:04:11.882540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:42:04.834 [2024-11-27 05:04:11.890666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:05.092 Validate MD5 checksum, iteration 1 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:05.092 05:04:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:05.092 05:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:05.092 [2024-11-27 05:04:12.162174] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization... 00:42:05.092 [2024-11-27 05:04:12.162288] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83624 ] 00:42:05.350 [2024-11-27 05:04:12.323639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:05.350 [2024-11-27 05:04:12.429334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:07.257  [2024-11-27T05:04:14.720Z] Copying: 555/1024 [MB] (555 MBps) [2024-11-27T05:04:20.008Z] Copying: 1024/1024 [MB] (average 585 MBps) 00:42:12.806 00:42:12.806 05:04:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:42:12.806 05:04:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:14.722 Validate MD5 checksum, iteration 2 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=989ae7097a5cbd0c369e5c8a75320fde 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 989ae7097a5cbd0c369e5c8a75320fde != \9\8\9\a\e\7\0\9\7\a\5\c\b\d\0\c\3\6\9\e\5\c\8\a\7\5\3\2\0\f\d\e ]] 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:14.722 05:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:14.722 [2024-11-27 05:04:21.535827] Starting SPDK v25.01-pre git sha1 
78decfef6 / DPDK 24.03.0 initialization... 00:42:14.722 [2024-11-27 05:04:21.535917] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83724 ] 00:42:14.722 [2024-11-27 05:04:21.689200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.722 [2024-11-27 05:04:21.792895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.631  [2024-11-27T05:04:24.095Z] Copying: 655/1024 [MB] (655 MBps) [2024-11-27T05:04:26.644Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:42:19.441 00:42:19.441 05:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:42:19.441 05:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6a5b20f08d7d92f18674dd58bdfee051 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6a5b20f08d7d92f18674dd58bdfee051 != \6\a\5\b\2\0\f\0\8\d\7\d\9\2\f\1\8\6\7\4\d\d\5\8\b\d\f\e\e\0\5\1 ]] 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:42:21.347 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83589 ]] 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83589 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83589 ']' 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83589 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83589 00:42:21.606 killing process with pid 83589 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83589' 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83589 00:42:21.606 05:04:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83589 00:42:22.173 [2024-11-27 05:04:29.156676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:42:22.173 [2024-11-27 05:04:29.167369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.167404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:42:22.173 [2024-11-27 05:04:29.167414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:42:22.173 [2024-11-27 05:04:29.167421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.167438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:42:22.173 [2024-11-27 05:04:29.169540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.169564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:42:22.173 [2024-11-27 05:04:29.169576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.091 ms 00:42:22.173 [2024-11-27 05:04:29.169582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.169776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.169783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:42:22.173 [2024-11-27 05:04:29.169790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:42:22.173 [2024-11-27 05:04:29.169795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.170850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.170975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:42:22.173 [2024-11-27 05:04:29.170986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:42:22.173 [2024-11-27 05:04:29.170996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.171867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.171880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:42:22.173 [2024-11-27 05:04:29.171889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:42:22.173 [2024-11-27 05:04:29.171894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.179024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.179050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:42:22.173 [2024-11-27 05:04:29.179057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.103 ms 00:42:22.173 [2024-11-27 05:04:29.179077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.182992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.183019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:42:22.173 [2024-11-27 05:04:29.183028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.887 ms 00:42:22.173 [2024-11-27 05:04:29.183034] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.183107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.183116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:42:22.173 [2024-11-27 05:04:29.183122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:42:22.173 [2024-11-27 05:04:29.183132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.190220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.190244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:42:22.173 [2024-11-27 05:04:29.190251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.068 ms 00:42:22.173 [2024-11-27 05:04:29.190257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.197227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.197331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:42:22.173 [2024-11-27 05:04:29.197343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.945 ms 00:42:22.173 [2024-11-27 05:04:29.197349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.204449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.204544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:42:22.173 [2024-11-27 05:04:29.204556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.076 ms 00:42:22.173 [2024-11-27 05:04:29.204562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.211412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.173 [2024-11-27 05:04:29.211504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:42:22.173 [2024-11-27 05:04:29.211515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.798 ms 00:42:22.173 [2024-11-27 05:04:29.211520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.173 [2024-11-27 05:04:29.211542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:42:22.173 [2024-11-27 05:04:29.211553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:42:22.173 [2024-11-27 05:04:29.211561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:42:22.173 [2024-11-27 05:04:29.211567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:42:22.173 [2024-11-27 05:04:29.211573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 
[2024-11-27 05:04:29.211602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:22.173 [2024-11-27 05:04:29.211613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:22.174 [2024-11-27 05:04:29.211659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:42:22.174 [2024-11-27 05:04:29.211665] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a2acda3b-d569-4b5e-82be-d717fca2bca3 00:42:22.174 [2024-11-27 05:04:29.211671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:42:22.174 [2024-11-27 05:04:29.211676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:42:22.174 [2024-11-27 05:04:29.211681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:42:22.174 [2024-11-27 05:04:29.211687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:42:22.174 [2024-11-27 05:04:29.211692] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:42:22.174 [2024-11-27 05:04:29.211697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:42:22.174 [2024-11-27 05:04:29.211703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:42:22.174 [2024-11-27 05:04:29.211708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:42:22.174 [2024-11-27 05:04:29.211713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:42:22.174 [2024-11-27 05:04:29.211718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.174 [2024-11-27 05:04:29.211728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:42:22.174 [2024-11-27 05:04:29.211734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:42:22.174 [2024-11-27 05:04:29.211742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:22.174 [2024-11-27 05:04:29.221285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:22.174 [2024-11-27 05:04:29.221309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:42:22.174 [2024-11-27 05:04:29.221316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.530 ms 00:42:22.174 [2024-11-27 05:04:29.221322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:42:22.174 [2024-11-27 05:04:29.221602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:42:22.174 [2024-11-27 05:04:29.221613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:42:22.174 [2024-11-27 05:04:29.221619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms
00:42:22.174 [2024-11-27 05:04:29.221624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.254874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.254972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:42:22.174 [2024-11-27 05:04:29.254984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.254990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.255017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.255023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:42:22.174 [2024-11-27 05:04:29.255029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.255035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.255096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.255104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:42:22.174 [2024-11-27 05:04:29.255111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.255116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.255133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.255139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:42:22.174 [2024-11-27 05:04:29.255145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.255151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.314667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.314698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:42:22.174 [2024-11-27 05:04:29.314707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.314713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:42:22.174 [2024-11-27 05:04:29.363660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:42:22.174 [2024-11-27 05:04:29.363744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:42:22.174 [2024-11-27 05:04:29.363805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:42:22.174 [2024-11-27 05:04:29.363892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:42:22.174 [2024-11-27 05:04:29.363936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.363969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.363976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:42:22.174 [2024-11-27 05:04:29.363982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.363987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.364021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:42:22.174 [2024-11-27 05:04:29.364028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:42:22.174 [2024-11-27 05:04:29.364035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:42:22.174 [2024-11-27 05:04:29.364041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:42:22.174 [2024-11-27 05:04:29.364143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.753 ms, result 0
00:42:23.109 05:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:42:23.109 05:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:23.109 Remove shared memory files
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83386
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:42:23.109 ************************************
00:42:23.109 END TEST ftl_upgrade_shutdown
00:42:23.109 ************************************
00:42:23.109
00:42:23.109 real 1m25.681s
00:42:23.109 user 1m56.620s
00:42:23.109 sys 0m20.254s
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:23.109 05:04:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:42:23.109 Process with pid 75075 is not found
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@14 -- # killprocess 75075
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 75075 ']'
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@958 -- # kill -0 75075
00:42:23.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75075) - No such process
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75075 is not found'
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83844
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83844
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@835 -- # '[' -z 83844 ']'
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:23.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:23.109 05:04:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:42:23.109 05:04:30 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:23.109 [2024-11-27 05:04:30.150010] Starting SPDK v25.01-pre git sha1 78decfef6 / DPDK 24.03.0 initialization...
00:42:23.109 [2024-11-27 05:04:30.150155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83844 ]
00:42:23.109 [2024-11-27 05:04:30.306568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:23.368 [2024-11-27 05:04:30.381898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
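The trace above exercises killprocess twice: once against the stale pid 75075 (already gone, hence the "No such process" branch) and once, further down, against the fresh target pid 83844. A minimal sketch of the helper in test/common/autotest_common.sh, reconstructed from the xtrace line numbers rather than copied from the SPDK source; the sudo comparison at line 964 presumably selects a privileged kill path and is reduced to a comment here:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                            # @954: reject an empty pid
        if ! kill -0 "$pid" 2> /dev/null; then               # @958: probe without signalling
            echo "Process with pid $pid is not found"        # @981
            return 0
        fi
        local process_name=
        if [[ $(uname) == Linux ]]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: e.g. reactor_0
        fi
        echo "killing process with pid $pid"                 # @972
        # @964 compares $process_name against sudo; only the plain branch is shown
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978
    }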
00:42:23.935 05:04:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:23.935 05:04:30 ftl -- common/autotest_common.sh@868 -- # return 0
00:42:23.935 05:04:30 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:42:24.193 nvme0n1
00:42:24.193 05:04:31 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:42:24.193 05:04:31 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:42:24.193 05:04:31 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:42:24.451 05:04:31 ftl -- ftl/common.sh@28 -- # stores=2741d7c8-3ad6-4b09-ae73-8a1810354733
00:42:24.452 05:04:31 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:42:24.452 05:04:31 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2741d7c8-3ad6-4b09-ae73-8a1810354733
00:42:24.710 05:04:31 ftl -- ftl/ftl.sh@23 -- # killprocess 83844
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 83844 ']'
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@958 -- # kill -0 83844
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@959 -- # uname
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83844
00:42:24.710 killing process with pid 83844
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83844'
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@973 -- # kill 83844
00:42:24.710 05:04:31 ftl -- common/autotest_common.sh@978 -- # wait 83844
00:42:25.647 05:04:32 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:42:25.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:42:25.907 Waiting for block devices as requested
00:42:26.169 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:42:26.169 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:42:26.169 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:42:26.430 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:42:31.839 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:42:31.839 05:04:38 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:42:31.839 Remove shared memory files
00:42:31.839 05:04:38 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:42:31.839 05:04:38 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:42:31.839 05:04:38 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:42:31.839 05:04:38 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:42:31.839 05:04:38 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:42:31.839 05:04:38 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:42:31.839 ************************************
00:42:31.839 END TEST ftl
00:42:31.839 ************************************
00:42:31.839
00:42:31.839 real 13m5.872s
00:42:31.839 user 15m16.707s
00:42:31.839 sys 1m14.633s
00:42:31.839 05:04:38 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:31.839 05:04:38 ftl -- common/autotest_common.sh@10 -- # set +x
00:42:31.839 05:04:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:42:31.839 05:04:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:42:31.839 05:04:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:42:31.839 05:04:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:42:31.839 05:04:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:42:31.839 05:04:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:42:31.839 05:04:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:42:31.839 05:04:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:42:31.839 05:04:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:42:31.839 05:04:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:42:31.839 05:04:38 -- common/autotest_common.sh@726 -- # xtrace_disable
00:42:31.839 05:04:38 -- common/autotest_common.sh@10 -- # set +x
00:42:31.839 05:04:38 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:42:31.839 05:04:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:42:31.839 05:04:38 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:42:31.839 05:04:38 -- common/autotest_common.sh@10 -- # set +x
00:42:33.224 INFO: APP EXITING
00:42:33.224 INFO: killing all VMs
00:42:33.224 INFO: killing vhost app
00:42:33.224 INFO: EXIT DONE
00:42:33.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:42:33.748 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:42:33.748 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:42:33.748 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:42:33.748 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:42:34.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
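The clear_lvols step traced at ftl/ftl.sh line 22 above keeps reruns hermetic: any lvolstore left on the cache device by an earlier test is deleted before the target shuts down. A sketch of the helper in test/ftl/common.sh (lines 28-30), pieced together from the xtrace; using $rootdir for /home/vagrant/spdk_repo/spdk is an assumption, not a name confirmed by this log:

    clear_lvols() {
        # ask the running target for every lvolstore UUID it still knows about
        stores=$("$rootdir/scripts/rpc.py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        # delete each one so the base bdev comes back clean for the next run
        for lvs in $stores; do
            "$rootdir/scripts/rpc.py" bdev_lvol_delete_lvstore -u "$lvs"
        done
    }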
00:42:34.582 Cleaning
00:42:34.582 Removing: /var/run/dpdk/spdk0/config
00:42:34.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:42:34.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:42:34.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:42:34.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:42:34.582 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:42:34.582 Removing: /var/run/dpdk/spdk0/hugepage_info
00:42:34.582 Removing: /var/run/dpdk/spdk0
00:42:34.582 Removing: /var/run/dpdk/spdk_pid56939
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57141
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57359
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57452
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57497
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57620
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57638
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57831
00:42:34.582 Removing: /var/run/dpdk/spdk_pid57924
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58020
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58131
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58223
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58268
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58299
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58375
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58481
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58912
00:42:34.582 Removing: /var/run/dpdk/spdk_pid58970
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59033
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59049
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59151
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59167
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59258
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59274
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59333
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59351
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59404
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59422
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59582
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59618
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59702
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59874
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59958
00:42:34.582 Removing: /var/run/dpdk/spdk_pid59989
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60437
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60529
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60642
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60695
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60726
00:42:34.582 Removing: /var/run/dpdk/spdk_pid60799
00:42:34.582 Removing: /var/run/dpdk/spdk_pid61429
00:42:34.582 Removing: /var/run/dpdk/spdk_pid61471
00:42:34.582 Removing: /var/run/dpdk/spdk_pid61952
00:42:34.844 Removing: /var/run/dpdk/spdk_pid62050
00:42:34.844 Removing: /var/run/dpdk/spdk_pid62165
00:42:34.844 Removing: /var/run/dpdk/spdk_pid62218
00:42:34.844 Removing: /var/run/dpdk/spdk_pid62238
00:42:34.844 Removing: /var/run/dpdk/spdk_pid62269
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64103
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64235
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64239
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64256
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64304
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64308
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64320
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64365
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64369
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64381
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64426
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64430
00:42:34.844 Removing: /var/run/dpdk/spdk_pid64442
00:42:34.844 Removing: /var/run/dpdk/spdk_pid65829
00:42:34.844 Removing: /var/run/dpdk/spdk_pid65926
00:42:34.844 Removing: /var/run/dpdk/spdk_pid67325
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69066
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69134
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69214
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69317
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69414
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69510
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69584
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69665
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69769
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69861
00:42:34.844 Removing: /var/run/dpdk/spdk_pid69962
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70025
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70106
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70210
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70302
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70392
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70466
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70536
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70640
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70738
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70834
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70898
00:42:34.844 Removing: /var/run/dpdk/spdk_pid70978
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71052
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71126
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71224
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71321
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71416
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71479
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71553
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71632
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71702
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71805
00:42:34.844 Removing: /var/run/dpdk/spdk_pid71896
00:42:34.844 Removing: /var/run/dpdk/spdk_pid72045
00:42:34.844 Removing: /var/run/dpdk/spdk_pid72324
00:42:34.844 Removing: /var/run/dpdk/spdk_pid72362
00:42:34.844 Removing: /var/run/dpdk/spdk_pid72822
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73009
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73102
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73212
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73269
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73289
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73600
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73662
00:42:34.844 Removing: /var/run/dpdk/spdk_pid73736
00:42:34.844 Removing: /var/run/dpdk/spdk_pid74135
00:42:34.844 Removing: /var/run/dpdk/spdk_pid74270
00:42:34.844 Removing: /var/run/dpdk/spdk_pid75075
00:42:34.844 Removing: /var/run/dpdk/spdk_pid75208
00:42:34.844 Removing: /var/run/dpdk/spdk_pid75390
00:42:34.844 Removing: /var/run/dpdk/spdk_pid75486
00:42:34.844 Removing: /var/run/dpdk/spdk_pid75794
00:42:34.844 Removing: /var/run/dpdk/spdk_pid76069
00:42:34.844 Removing: /var/run/dpdk/spdk_pid76416
00:42:34.844 Removing: /var/run/dpdk/spdk_pid76598
00:42:34.844 Removing: /var/run/dpdk/spdk_pid76812
00:42:34.844 Removing: /var/run/dpdk/spdk_pid76870
00:42:34.844 Removing: /var/run/dpdk/spdk_pid77030
00:42:34.844 Removing: /var/run/dpdk/spdk_pid77058
00:42:34.844 Removing: /var/run/dpdk/spdk_pid77112
00:42:34.844 Removing: /var/run/dpdk/spdk_pid77350
00:42:34.844 Removing: /var/run/dpdk/spdk_pid77577
00:42:34.844 Removing: /var/run/dpdk/spdk_pid78307
00:42:34.844 Removing: /var/run/dpdk/spdk_pid78987
00:42:34.844 Removing: /var/run/dpdk/spdk_pid79482
00:42:34.844 Removing: /var/run/dpdk/spdk_pid80302
00:42:34.844 Removing: /var/run/dpdk/spdk_pid80444
00:42:34.844 Removing: /var/run/dpdk/spdk_pid80528
00:42:34.844 Removing: /var/run/dpdk/spdk_pid81012
00:42:34.844 Removing: /var/run/dpdk/spdk_pid81069
00:42:34.844 Removing: /var/run/dpdk/spdk_pid81538
00:42:34.844 Removing: /var/run/dpdk/spdk_pid82035
00:42:34.844 Removing: /var/run/dpdk/spdk_pid82858
00:42:34.844 Removing: /var/run/dpdk/spdk_pid82981
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83023
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83085
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83141
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83199
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83386
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83461
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83528
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83589
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83624
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83724
00:42:35.106 Removing: /var/run/dpdk/spdk_pid83844
00:42:35.106 Clean
00:42:35.106 05:04:42 -- common/autotest_common.sh@1453 -- # return 0
00:42:35.106 05:04:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:42:35.106 05:04:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:35.106 05:04:42 -- common/autotest_common.sh@10 -- # set +x
00:42:35.106 05:04:42 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:42:35.106 05:04:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:35.106 05:04:42 -- common/autotest_common.sh@10 -- # set +x
00:42:35.106 05:04:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:42:35.106 05:04:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:42:35.106 05:04:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:42:35.106 05:04:42 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:42:35.106 05:04:42 -- spdk/autotest.sh@398 -- # hostname
00:42:35.106 05:04:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:42:35.368 geninfo: WARNING: invalid characters removed from testname!
00:43:02.006 05:05:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:04.552 05:05:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:07.101 05:05:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:09.016 05:05:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:10.919 05:05:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:13.462 05:05:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:43:15.368 05:05:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:15.368 05:05:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:43:15.368 05:05:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:43:15.368 05:05:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:15.368 05:05:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:15.368 05:05:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:43:15.368 + [[ -n 5031 ]]
00:43:15.368 + sudo kill 5031
00:43:15.377 [Pipeline] }
00:43:15.391 [Pipeline] // timeout
00:43:15.395 [Pipeline] }
00:43:15.408 [Pipeline] // stage
00:43:15.412 [Pipeline] }
00:43:15.425 [Pipeline] // catchError
00:43:15.433 [Pipeline] stage
00:43:15.434 [Pipeline] { (Stop VM)
00:43:15.445 [Pipeline] sh
00:43:15.729 + vagrant halt
00:43:18.280 ==> default: Halting domain...
00:43:24.875 [Pipeline] sh
00:43:25.173 + vagrant destroy -f
00:43:27.161 ==> default: Removing domain...
00:43:28.118 [Pipeline] sh
00:43:28.404 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:43:28.415 [Pipeline] }
00:43:28.431 [Pipeline] // stage
00:43:28.436 [Pipeline] }
00:43:28.450 [Pipeline] // dir
00:43:28.455 [Pipeline] }
00:43:28.470 [Pipeline] // wrap
00:43:28.476 [Pipeline] }
00:43:28.489 [Pipeline] // catchError
00:43:28.499 [Pipeline] stage
00:43:28.502 [Pipeline] { (Epilogue)
00:43:28.514 [Pipeline] sh
00:43:28.801 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:34.092 [Pipeline] catchError
00:43:34.094 [Pipeline] {
00:43:34.106 [Pipeline] sh
00:43:34.391 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:34.391 Artifacts sizes are good
00:43:34.401 [Pipeline] }
00:43:34.414 [Pipeline] // catchError
00:43:34.424 [Pipeline] archiveArtifacts
00:43:34.431 Archiving artifacts
00:43:34.552 [Pipeline] cleanWs
00:43:34.564 [WS-CLEANUP] Deleting project workspace...
00:43:34.564 [WS-CLEANUP] Deferred wipeout is used...
00:43:34.572 [WS-CLEANUP] done
00:43:34.574 [Pipeline] }
00:43:34.587 [Pipeline] // stage
00:43:34.592 [Pipeline] }
00:43:34.604 [Pipeline] // node
00:43:34.609 [Pipeline] End of Pipeline
00:43:34.643 Finished: SUCCESS
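For reference, the coverage post-processing traced at spdk/autotest.sh lines 398-408 above boils down to the sequence below: capture test-time counters, merge them with the pre-test baseline, then prune everything that is not SPDK's own code. The $LCOV_OPTS and $out variables are editorial shorthand for the repeated flag block and output directory, not names taken from the script:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"  # abbreviated flag set
    out=/home/vagrant/spdk_repo/spdk/../output
    # capture counters for the whole repo, tagged with the build host's name
    lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
    # merge with the baseline recorded before the tests ran
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # strip DPDK, system headers, and helper apps from the totals
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
    rm -f cov_base.info cov_test.info  # drop the intermediates, keeping cov_total.info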